Test Report: Docker_Linux_containerd_arm64 19712

                    
c4dd788a1c1ea09a0f3bb20836a8b75126e684b1:2024-09-27:36398

Failed tests (2/327)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 201.04       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 379.89       |
TestAddons/serial/Volcano (201.04s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 57.06621ms
addons_test.go:843: volcano-admission stabilized in 57.975049ms
addons_test.go:851: volcano-controller stabilized in 58.0304ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-mbvfg" [d6dde1c8-8103-4e54-ab7f-ad51644ba6f4] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003366118s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-dvr69" [a34c89d3-c723-4afa-b4fa-3cad566481ed] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00316728s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-vx57k" [a5e47f78-8e3d-4d02-bc54-d8496aa7ec52] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.023717816s
addons_test.go:870: (dbg) Run:  kubectl --context addons-583947 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-583947 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-583947 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4718477b-7617-4fb3-b681-d310bdc9c1eb] Pending
helpers_test.go:344: "test-job-nginx-0" [4718477b-7617-4fb3-b681-d310bdc9c1eb] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-583947 -n addons-583947
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-27 17:46:46.149879555 +0000 UTC m=+433.716583811
addons_test.go:902: (dbg) Run:  kubectl --context addons-583947 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-583947 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-1d36637a-c994-43d1-85a4-188abdde5044
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-chtfq (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-chtfq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-583947 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-583947 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
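The scheduler event above, "0/1 nodes are unavailable: 1 Insufficient cpu.", is the root cause: the single minikube node (capped at 2 CPUs, see the docker inspect dump below) cannot fit the job's 1-CPU request on top of the addons already running. The test's testdata/vcjob.yaml is not reproduced in this report; the sketch below is a hypothetical reconstruction from the pod description above (queue "test", task "nginx", image nginx:latest, command "sleep 10m", 1-CPU request and limit), followed by a check of what the node already has committed.

# Hypothetical reconstruction, assuming the standard Volcano Job schema;
# this is NOT the repository's actual testdata/vcjob.yaml, and it assumes
# the my-volcano namespace and the "test" queue already exist.
kubectl --context addons-583947 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  minAvailable: 1
  schedulerName: volcano
  queue: test
  tasks:
    - name: nginx
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx:latest
              command: ["sleep", "10m"]
              resources:
                requests:
                  cpu: "1"
                limits:
                  cpu: "1"
EOF

# Compare the node's allocatable CPU with what is already requested:
kubectl --context addons-583947 describe node addons-583947 | grep -A 10 'Allocated resources'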
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-583947
helpers_test.go:235: (dbg) docker inspect addons-583947:
-- stdout --
	[
	    {
	        "Id": "0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492",
	        "Created": "2024-09-27T17:40:13.140348239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T17:40:13.286117289Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492/hostname",
	        "HostsPath": "/var/lib/docker/containers/0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492/hosts",
	        "LogPath": "/var/lib/docker/containers/0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492/0c7a0efa915c7db07956a4b5b0a7fbfc510abab1dc7f899fe6dba94a991c2492-json.log",
	        "Name": "/addons-583947",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-583947:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-583947",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7db4c1cbdc396638accb2b508caa384e31fd4cec50b10bf739f2689c02bc6d10-init/diff:/var/lib/docker/overlay2/a37a697d35bc9dd6b22fe821f055b93d8ecad36dc406167b9eb9ad78951bada0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7db4c1cbdc396638accb2b508caa384e31fd4cec50b10bf739f2689c02bc6d10/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7db4c1cbdc396638accb2b508caa384e31fd4cec50b10bf739f2689c02bc6d10/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7db4c1cbdc396638accb2b508caa384e31fd4cec50b10bf739f2689c02bc6d10/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-583947",
	                "Source": "/var/lib/docker/volumes/addons-583947/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-583947",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-583947",
	                "name.minikube.sigs.k8s.io": "addons-583947",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ebc20ca29f78320e9462590de67f5bbb1f3848bc3eb325afe696cfc859a532c",
	            "SandboxKey": "/var/run/docker/netns/3ebc20ca29f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-583947": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f5eaff548005d97efdfeb19b99273f3ddfd6889279cb6c2789ff3a3c06139775",
	                    "EndpointID": "80dcadf37b610b5f36e9defb9f1d0c3927f4a385f0d918155c14db90981aa22c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-583947",
	                        "0c7a0efa915c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
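Rather than scanning the full JSON above, the limits relevant to this failure can be pulled directly with docker inspect's Go-template formatter. A small convenience sketch, not part of the test run; the values match the dump above:

# NanoCpus is in billionths of a CPU (2000000000 = 2 CPUs);
# Memory is in bytes (4194304000 ≈ 4 GB).
docker inspect -f 'cpus={{.HostConfig.NanoCpus}} mem={{.HostConfig.Memory}} status={{.State.Status}}' addons-583947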
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-583947 -n addons-583947
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 logs -n 25: (1.666391823s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-324473   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | -p download-only-324473              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| delete  | -p download-only-324473              | download-only-324473   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| start   | -o=json --download-only              | download-only-496322   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | -p download-only-496322              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| delete  | -p download-only-496322              | download-only-496322   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| delete  | -p download-only-324473              | download-only-324473   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| delete  | -p download-only-496322              | download-only-496322   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| start   | --download-only -p                   | download-docker-268027 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | download-docker-268027               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-268027            | download-docker-268027 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| start   | --download-only -p                   | binary-mirror-691798   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | binary-mirror-691798                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44453               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-691798              | binary-mirror-691798   | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| addons  | enable dashboard -p                  | addons-583947          | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | addons-583947                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-583947          | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | addons-583947                        |                        |         |         |                     |                     |
	| start   | -p addons-583947 --wait=true         | addons-583947          | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:39:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:39:47.975473  300155 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:39:47.975655  300155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:47.975668  300155 out.go:358] Setting ErrFile to fd 2...
	I0927 17:39:47.975676  300155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:47.975942  300155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 17:39:47.976437  300155 out.go:352] Setting JSON to false
	I0927 17:39:47.977480  300155 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4939,"bootTime":1727453849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 17:39:47.977556  300155 start.go:139] virtualization:  
	I0927 17:39:47.979980  300155 out.go:177] * [addons-583947] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 17:39:47.982349  300155 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:39:47.982442  300155 notify.go:220] Checking for updates...
	I0927 17:39:47.986077  300155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:39:47.988031  300155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:39:47.989940  300155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 17:39:47.991613  300155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 17:39:47.993438  300155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:39:47.995725  300155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:39:48.024024  300155 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:39:48.024196  300155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:48.085765  300155 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 17:39:48.076083428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:48.085895  300155 docker.go:318] overlay module found
	I0927 17:39:48.088969  300155 out.go:177] * Using the docker driver based on user configuration
	I0927 17:39:48.090631  300155 start.go:297] selected driver: docker
	I0927 17:39:48.090651  300155 start.go:901] validating driver "docker" against <nil>
	I0927 17:39:48.090667  300155 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:39:48.091429  300155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:48.141226  300155 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 17:39:48.131639441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:48.141526  300155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:39:48.141755  300155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:39:48.143807  300155 out.go:177] * Using Docker driver with root privileges
	I0927 17:39:48.145983  300155 cni.go:84] Creating CNI manager for ""
	I0927 17:39:48.146059  300155 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 17:39:48.146074  300155 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:39:48.146170  300155 start.go:340] cluster config:
	{Name:addons-583947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-583947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:39:48.149294  300155 out.go:177] * Starting "addons-583947" primary control-plane node in "addons-583947" cluster
	I0927 17:39:48.150842  300155 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 17:39:48.152494  300155 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 17:39:48.154011  300155 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 17:39:48.154066  300155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 17:39:48.154080  300155 cache.go:56] Caching tarball of preloaded images
	I0927 17:39:48.154165  300155 preload.go:172] Found /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 17:39:48.154179  300155 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0927 17:39:48.154548  300155 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/config.json ...
	I0927 17:39:48.154579  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/config.json: {Name:mk7cd4ba870d1b8238f50a977a0978625ab10cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:39:48.154756  300155 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 17:39:48.169188  300155 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 17:39:48.169364  300155 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 17:39:48.169389  300155 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 17:39:48.169399  300155 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 17:39:48.169407  300155 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 17:39:48.169416  300155 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 17:40:05.891995  300155 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 17:40:05.892035  300155 cache.go:194] Successfully downloaded all kic artifacts
	I0927 17:40:05.892065  300155 start.go:360] acquireMachinesLock for addons-583947: {Name:mk872710b255576436d83c5540a2b2688217a7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:40:05.893595  300155 start.go:364] duration metric: took 1.501683ms to acquireMachinesLock for "addons-583947"
	I0927 17:40:05.893643  300155 start.go:93] Provisioning new machine with config: &{Name:addons-583947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-583947 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 17:40:05.893738  300155 start.go:125] createHost starting for "" (driver="docker")
	I0927 17:40:05.897353  300155 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 17:40:05.897623  300155 start.go:159] libmachine.API.Create for "addons-583947" (driver="docker")
	I0927 17:40:05.897668  300155 client.go:168] LocalClient.Create starting
	I0927 17:40:05.897787  300155 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem
	I0927 17:40:06.376339  300155 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem
	I0927 17:40:06.720773  300155 cli_runner.go:164] Run: docker network inspect addons-583947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 17:40:06.736503  300155 cli_runner.go:211] docker network inspect addons-583947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 17:40:06.736608  300155 network_create.go:284] running [docker network inspect addons-583947] to gather additional debugging logs...
	I0927 17:40:06.736631  300155 cli_runner.go:164] Run: docker network inspect addons-583947
	W0927 17:40:06.752065  300155 cli_runner.go:211] docker network inspect addons-583947 returned with exit code 1
	I0927 17:40:06.752106  300155 network_create.go:287] error running [docker network inspect addons-583947]: docker network inspect addons-583947: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-583947 not found
	I0927 17:40:06.752122  300155 network_create.go:289] output of [docker network inspect addons-583947]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-583947 not found
	
	** /stderr **
	I0927 17:40:06.752242  300155 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 17:40:06.768193  300155 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cac7a0}
	I0927 17:40:06.768240  300155 network_create.go:124] attempt to create docker network addons-583947 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 17:40:06.768300  300155 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-583947 addons-583947
	I0927 17:40:06.837360  300155 network_create.go:108] docker network addons-583947 192.168.49.0/24 created
	I0927 17:40:06.837394  300155 kic.go:121] calculated static IP "192.168.49.2" for the "addons-583947" container
	I0927 17:40:06.837473  300155 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 17:40:06.853226  300155 cli_runner.go:164] Run: docker volume create addons-583947 --label name.minikube.sigs.k8s.io=addons-583947 --label created_by.minikube.sigs.k8s.io=true
	I0927 17:40:06.871620  300155 oci.go:103] Successfully created a docker volume addons-583947
	I0927 17:40:06.871715  300155 cli_runner.go:164] Run: docker run --rm --name addons-583947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583947 --entrypoint /usr/bin/test -v addons-583947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 17:40:09.000455  300155 cli_runner.go:217] Completed: docker run --rm --name addons-583947-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583947 --entrypoint /usr/bin/test -v addons-583947:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.128690433s)
	I0927 17:40:09.000485  300155 oci.go:107] Successfully prepared a docker volume addons-583947
	I0927 17:40:09.000515  300155 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 17:40:09.000535  300155 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 17:40:09.000600  300155 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-583947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 17:40:13.073532  300155 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-583947:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.072891209s)
	I0927 17:40:13.073565  300155 kic.go:203] duration metric: took 4.073026213s to extract preloaded images to volume ...
	W0927 17:40:13.073720  300155 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 17:40:13.073838  300155 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 17:40:13.125712  300155 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-583947 --name addons-583947 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-583947 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-583947 --network addons-583947 --ip 192.168.49.2 --volume addons-583947:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 17:40:13.444141  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Running}}
	I0927 17:40:13.477784  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:13.499144  300155 cli_runner.go:164] Run: docker exec addons-583947 stat /var/lib/dpkg/alternatives/iptables
	I0927 17:40:13.559280  300155 oci.go:144] the created container "addons-583947" has a running status.
	I0927 17:40:13.559321  300155 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa...
	I0927 17:40:14.753440  300155 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 17:40:14.772201  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:14.789922  300155 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 17:40:14.789944  300155 kic_runner.go:114] Args: [docker exec --privileged addons-583947 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 17:40:14.833390  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:14.852093  300155 machine.go:93] provisionDockerMachine start ...
	I0927 17:40:14.852197  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:14.868592  300155 main.go:141] libmachine: Using SSH client type: native
	I0927 17:40:14.868863  300155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0927 17:40:14.868873  300155 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 17:40:14.996830  300155 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583947
	
	I0927 17:40:14.996854  300155 ubuntu.go:169] provisioning hostname "addons-583947"
	I0927 17:40:14.996922  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:15.014064  300155 main.go:141] libmachine: Using SSH client type: native
	I0927 17:40:15.014321  300155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0927 17:40:15.014341  300155 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-583947 && echo "addons-583947" | sudo tee /etc/hostname
	I0927 17:40:15.180375  300155 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-583947
	
	I0927 17:40:15.180457  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:15.198303  300155 main.go:141] libmachine: Using SSH client type: native
	I0927 17:40:15.198570  300155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0927 17:40:15.198593  300155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-583947' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-583947/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-583947' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:40:15.329337  300155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:40:15.329364  300155 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19712-294006/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-294006/.minikube}
	I0927 17:40:15.329389  300155 ubuntu.go:177] setting up certificates
	I0927 17:40:15.329398  300155 provision.go:84] configureAuth start
	I0927 17:40:15.329463  300155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583947
	I0927 17:40:15.345963  300155 provision.go:143] copyHostCerts
	I0927 17:40:15.346054  300155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem (1078 bytes)
	I0927 17:40:15.346192  300155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem (1123 bytes)
	I0927 17:40:15.346250  300155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem (1675 bytes)
	I0927 17:40:15.346306  300155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem org=jenkins.addons-583947 san=[127.0.0.1 192.168.49.2 addons-583947 localhost minikube]
	I0927 17:40:15.598681  300155 provision.go:177] copyRemoteCerts
	I0927 17:40:15.598751  300155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:40:15.598794  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:15.617126  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:15.712843  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 17:40:15.737010  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:40:15.762669  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:40:15.786760  300155 provision.go:87] duration metric: took 457.347453ms to configureAuth
	I0927 17:40:15.786791  300155 ubuntu.go:193] setting minikube options for container-runtime
	I0927 17:40:15.787008  300155 config.go:182] Loaded profile config "addons-583947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:40:15.787021  300155 machine.go:96] duration metric: took 934.904073ms to provisionDockerMachine
	I0927 17:40:15.787030  300155 client.go:171] duration metric: took 9.889353174s to LocalClient.Create
	I0927 17:40:15.787059  300155 start.go:167] duration metric: took 9.889437965s to libmachine.API.Create "addons-583947"
	I0927 17:40:15.787075  300155 start.go:293] postStartSetup for "addons-583947" (driver="docker")
	I0927 17:40:15.787086  300155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:40:15.787166  300155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:40:15.787212  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:15.803722  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:15.903112  300155 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:40:15.906657  300155 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 17:40:15.906697  300155 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 17:40:15.906710  300155 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 17:40:15.906717  300155 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 17:40:15.906728  300155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/addons for local assets ...
	I0927 17:40:15.906797  300155 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/files for local assets ...
	I0927 17:40:15.906827  300155 start.go:296] duration metric: took 119.744633ms for postStartSetup
	I0927 17:40:15.907143  300155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583947
	I0927 17:40:15.923827  300155 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/config.json ...
	I0927 17:40:15.924113  300155 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:40:15.924171  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:15.940513  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:16.030757  300155 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 17:40:16.035523  300155 start.go:128] duration metric: took 10.141765675s to createHost
	I0927 17:40:16.035551  300155 start.go:83] releasing machines lock for "addons-583947", held for 10.141933131s
	I0927 17:40:16.035627  300155 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-583947
	I0927 17:40:16.052773  300155 ssh_runner.go:195] Run: cat /version.json
	I0927 17:40:16.052829  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:16.052844  300155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:40:16.052919  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:16.071405  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:16.077560  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:16.298615  300155 ssh_runner.go:195] Run: systemctl --version
	I0927 17:40:16.303174  300155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 17:40:16.307825  300155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 17:40:16.331981  300155 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
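
The find/sed pipeline above normalizes whatever loopback config ships in the image: it inserts a "name" key if missing and pins cniVersion to 1.0.0. After patching, a minimal loopback config in /etc/cni/net.d would look roughly like this (reconstructed from the two sed expressions, not copied from the run):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}
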
	I0927 17:40:16.332104  300155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:40:16.363725  300155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 17:40:16.363764  300155 start.go:495] detecting cgroup driver to use...
	I0927 17:40:16.363799  300155 detect.go:187] detected "cgroupfs" cgroup driver on host os
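
detect.go reports "cgroupfs" here because the host is still on cgroup v1 and nothing requested the systemd driver. One common heuristic for the v1/v2 half of that decision — a hypothetical sketch, not minikube's actual detection code — is the filesystem magic of /sys/fs/cgroup:

	// Hypothetical sketch, not minikube's detect.go: cgroup v2 mounts
	// cgroup2fs at /sys/fs/cgroup; on v1 hosts that path is a tmpfs of
	// per-controller mounts, and "cgroupfs" is the default driver.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			panic(err)
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 - consistent with the cgroupfs driver logged above")
		}
	}
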
	I0927 17:40:16.363859  300155 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 17:40:16.376852  300155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 17:40:16.388663  300155 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:40:16.388734  300155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:40:16.402552  300155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:40:16.417679  300155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:40:16.499442  300155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:40:16.600050  300155 docker.go:233] disabling docker service ...
	I0927 17:40:16.600140  300155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:40:16.619751  300155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:40:16.631777  300155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:40:16.725452  300155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:40:16.825004  300155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:40:16.836701  300155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:40:16.854106  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 17:40:16.864903  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 17:40:16.875767  300155 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 17:40:16.875890  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 17:40:16.886991  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 17:40:16.897968  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 17:40:16.908876  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 17:40:16.919799  300155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:40:16.929795  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 17:40:16.940359  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 17:40:16.951006  300155 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
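
Taken together, the sed edits above amount to a /etc/containerd/config.toml fragment roughly like the following (containerd 1.7 CRI-plugin keys; reconstructed from the commands, not copied from the run):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false

SystemdCgroup = false matches the "cgroupfs" driver detected earlier; the daemon-reload and containerd restart below pick the file up.
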
	I0927 17:40:16.961916  300155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:40:16.970714  300155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:40:16.979632  300155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:40:17.060724  300155 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 17:40:17.187137  300155 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0927 17:40:17.187260  300155 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0927 17:40:17.191079  300155 start.go:563] Will wait 60s for crictl version
	I0927 17:40:17.191153  300155 ssh_runner.go:195] Run: which crictl
	I0927 17:40:17.194761  300155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:40:17.232963  300155 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0927 17:40:17.233053  300155 ssh_runner.go:195] Run: containerd --version
	I0927 17:40:17.255402  300155 ssh_runner.go:195] Run: containerd --version
	I0927 17:40:17.282811  300155 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0927 17:40:17.284928  300155 cli_runner.go:164] Run: docker network inspect addons-583947 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 17:40:17.300335  300155 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 17:40:17.303875  300155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:40:17.315060  300155 kubeadm.go:883] updating cluster {Name:addons-583947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-583947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:40:17.315184  300155 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 17:40:17.315247  300155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:40:17.353925  300155 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 17:40:17.353951  300155 containerd.go:534] Images already preloaded, skipping extraction
	I0927 17:40:17.354020  300155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:40:17.393982  300155 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 17:40:17.394007  300155 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:40:17.394015  300155 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0927 17:40:17.394124  300155 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-583947 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-583947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:40:17.394195  300155 ssh_runner.go:195] Run: sudo crictl info
	I0927 17:40:17.431108  300155 cni.go:84] Creating CNI manager for ""
	I0927 17:40:17.431134  300155 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 17:40:17.431146  300155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:40:17.431203  300155 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-583947 NodeName:addons-583947 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:40:17.431381  300155 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-583947"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
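
Note the kubeadm.k8s.io/v1beta3 apiVersion used throughout the generated config: kubeadm v1.31 still accepts it but flags it as deprecated (see the two W0927 warnings in the init output below) and recommends rewriting the file with:

	kubeadm config migrate --old-config old.yaml --new-config new.yaml
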
	I0927 17:40:17.431460  300155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:40:17.440642  300155 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:40:17.440724  300155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 17:40:17.449573  300155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0927 17:40:17.467816  300155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:40:17.486889  300155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0927 17:40:17.504581  300155 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 17:40:17.507866  300155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:40:17.518675  300155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:40:17.604525  300155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:40:17.621955  300155 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947 for IP: 192.168.49.2
	I0927 17:40:17.622030  300155 certs.go:194] generating shared ca certs ...
	I0927 17:40:17.622062  300155 certs.go:226] acquiring lock for ca certs: {Name:mk0891ce7588143d48f2c5fb538d185b80c1ae26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:17.622236  300155 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key
	I0927 17:40:17.855554  300155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt ...
	I0927 17:40:17.855589  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt: {Name:mkaa50c4b01de0557c702bf39d18a5fced802706 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:17.855792  300155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key ...
	I0927 17:40:17.855805  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key: {Name:mk4e9d83c617f9aff0eb80310c9f92fd19452576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:17.855907  300155 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key
	I0927 17:40:18.268584  300155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.crt ...
	I0927 17:40:18.268616  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.crt: {Name:mke2689da11e30dbfb41ca826eb6bbb5ff2216d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:18.268803  300155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key ...
	I0927 17:40:18.268820  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key: {Name:mk32c888d4fef2caf8b8726bf656f521563b7dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:18.268902  300155 certs.go:256] generating profile certs ...
	I0927 17:40:18.268966  300155 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.key
	I0927 17:40:18.268984  300155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt with IP's: []
	I0927 17:40:18.909733  300155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt ...
	I0927 17:40:18.909772  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: {Name:mkd9abf870fe4a863850e8ce5146fbf418f75870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:18.909974  300155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.key ...
	I0927 17:40:18.909989  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.key: {Name:mk8aaf11f0f4b9a411c47496a7de9f3b6ac16997 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:18.910088  300155 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key.ffc9b220
	I0927 17:40:18.910112  300155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt.ffc9b220 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 17:40:19.199614  300155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt.ffc9b220 ...
	I0927 17:40:19.199650  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt.ffc9b220: {Name:mkff9598aec20c7cf81e244ea59602d2fe6072f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:19.199856  300155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key.ffc9b220 ...
	I0927 17:40:19.199873  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key.ffc9b220: {Name:mka06059fe35b558b2142658eac7918830be41b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:19.199954  300155 certs.go:381] copying /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt.ffc9b220 -> /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt
	I0927 17:40:19.200043  300155 certs.go:385] copying /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key.ffc9b220 -> /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key
	I0927 17:40:19.200101  300155 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.key
	I0927 17:40:19.200123  300155 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.crt with IP's: []
	I0927 17:40:19.453845  300155 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.crt ...
	I0927 17:40:19.453878  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.crt: {Name:mke633d161fbf046651ee94e27f9c3c298a7d340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:19.454688  300155 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.key ...
	I0927 17:40:19.454709  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.key: {Name:mk88d834ca0179d67d6c59a3ec6ee40e19843f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:19.454922  300155 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:40:19.454966  300155 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem (1078 bytes)
	I0927 17:40:19.454993  300155 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:40:19.455022  300155 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem (1675 bytes)
	I0927 17:40:19.455610  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:40:19.483120  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 17:40:19.508088  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:40:19.532347  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 17:40:19.556973  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 17:40:19.582552  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:40:19.607004  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:40:19.631620  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 17:40:19.656600  300155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:40:19.681715  300155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 17:40:19.701028  300155 ssh_runner.go:195] Run: openssl version
	I0927 17:40:19.706576  300155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:40:19.716259  300155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:40:19.720025  300155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:40:19.720141  300155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:40:19.727271  300155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
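
The symlink name created above follows OpenSSL's CApath convention, <subject-hash>.0, where the hash is what the `openssl x509 -hash -noout` call two lines earlier printed. From the symlink name, the hash for this CA is evidently b5213941, i.e. roughly:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941

This is how OpenSSL-based clients on the node locate the minikube CA without a rebuilt trust bundle.
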
	I0927 17:40:19.737292  300155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:40:19.740504  300155 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:40:19.740573  300155 kubeadm.go:392] StartCluster: {Name:addons-583947 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-583947 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:40:19.740667  300155 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0927 17:40:19.740732  300155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:40:19.776705  300155 cri.go:89] found id: ""
	I0927 17:40:19.776777  300155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 17:40:19.785730  300155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 17:40:19.794790  300155 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 17:40:19.794935  300155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 17:40:19.804132  300155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 17:40:19.804154  300155 kubeadm.go:157] found existing configuration files:
	
	I0927 17:40:19.804209  300155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 17:40:19.813100  300155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 17:40:19.813169  300155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 17:40:19.821848  300155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 17:40:19.831163  300155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 17:40:19.831232  300155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 17:40:19.839872  300155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 17:40:19.848860  300155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 17:40:19.848941  300155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 17:40:19.857747  300155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 17:40:19.867078  300155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 17:40:19.867172  300155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 17:40:19.875690  300155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 17:40:19.916664  300155 kubeadm.go:310] W0927 17:40:19.915930    1022 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:40:19.917329  300155 kubeadm.go:310] W0927 17:40:19.916826    1022 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:40:19.937741  300155 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 17:40:20.009606  300155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 17:40:37.827120  300155 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 17:40:37.827182  300155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 17:40:37.827272  300155 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 17:40:37.827331  300155 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 17:40:37.827371  300155 kubeadm.go:310] OS: Linux
	I0927 17:40:37.827420  300155 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 17:40:37.827480  300155 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 17:40:37.827536  300155 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 17:40:37.827596  300155 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 17:40:37.827648  300155 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 17:40:37.827700  300155 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 17:40:37.827749  300155 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 17:40:37.827802  300155 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 17:40:37.827851  300155 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 17:40:37.827926  300155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 17:40:37.828023  300155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 17:40:37.828114  300155 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 17:40:37.828179  300155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 17:40:37.830344  300155 out.go:235]   - Generating certificates and keys ...
	I0927 17:40:37.830445  300155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 17:40:37.830515  300155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 17:40:37.830585  300155 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 17:40:37.830644  300155 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 17:40:37.830710  300155 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 17:40:37.830764  300155 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 17:40:37.830822  300155 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 17:40:37.830944  300155 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-583947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 17:40:37.831000  300155 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 17:40:37.831116  300155 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-583947 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 17:40:37.831185  300155 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 17:40:37.831251  300155 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 17:40:37.831298  300155 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 17:40:37.831357  300155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 17:40:37.831412  300155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 17:40:37.831471  300155 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 17:40:37.831528  300155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 17:40:37.831594  300155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 17:40:37.831651  300155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 17:40:37.831736  300155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 17:40:37.831805  300155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 17:40:37.833874  300155 out.go:235]   - Booting up control plane ...
	I0927 17:40:37.834004  300155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 17:40:37.834097  300155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 17:40:37.834200  300155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 17:40:37.834312  300155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 17:40:37.834412  300155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 17:40:37.834454  300155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 17:40:37.834597  300155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 17:40:37.834765  300155 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 17:40:37.834835  300155 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501709543s
	I0927 17:40:37.834927  300155 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 17:40:37.835002  300155 kubeadm.go:310] [api-check] The API server is healthy after 6.5021021s
	I0927 17:40:37.835144  300155 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 17:40:37.835286  300155 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 17:40:37.835375  300155 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 17:40:37.835557  300155 kubeadm.go:310] [mark-control-plane] Marking the node addons-583947 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 17:40:37.835617  300155 kubeadm.go:310] [bootstrap-token] Using token: di1gyv.0em8exomdga425jk
	I0927 17:40:37.837554  300155 out.go:235]   - Configuring RBAC rules ...
	I0927 17:40:37.837677  300155 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 17:40:37.837759  300155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 17:40:37.837906  300155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 17:40:37.838041  300155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 17:40:37.838154  300155 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 17:40:37.838238  300155 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 17:40:37.838351  300155 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 17:40:37.838404  300155 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 17:40:37.838450  300155 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 17:40:37.838456  300155 kubeadm.go:310] 
	I0927 17:40:37.838516  300155 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 17:40:37.838520  300155 kubeadm.go:310] 
	I0927 17:40:37.838596  300155 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 17:40:37.838599  300155 kubeadm.go:310] 
	I0927 17:40:37.838625  300155 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 17:40:37.838682  300155 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 17:40:37.838732  300155 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 17:40:37.838736  300155 kubeadm.go:310] 
	I0927 17:40:37.838789  300155 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 17:40:37.838793  300155 kubeadm.go:310] 
	I0927 17:40:37.838840  300155 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 17:40:37.838844  300155 kubeadm.go:310] 
	I0927 17:40:37.838896  300155 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 17:40:37.838969  300155 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 17:40:37.839036  300155 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 17:40:37.839040  300155 kubeadm.go:310] 
	I0927 17:40:37.839123  300155 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 17:40:37.839199  300155 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 17:40:37.839203  300155 kubeadm.go:310] 
	I0927 17:40:37.839286  300155 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token di1gyv.0em8exomdga425jk \
	I0927 17:40:37.839387  300155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e36d06a61fed1cb797521692277c6fed05d87d948beae49341a57851e31b2de5 \
	I0927 17:40:37.839408  300155 kubeadm.go:310] 	--control-plane 
	I0927 17:40:37.839412  300155 kubeadm.go:310] 
	I0927 17:40:37.839495  300155 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 17:40:37.839499  300155 kubeadm.go:310] 
	I0927 17:40:37.839581  300155 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token di1gyv.0em8exomdga425jk \
	I0927 17:40:37.839694  300155 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e36d06a61fed1cb797521692277c6fed05d87d948beae49341a57851e31b2de5 
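
The [kubelet-check] and [api-check] phases earlier in this output (healthy after 1.501709543s and 6.5021021s respectively) are plain healthz polls with a 4m0s budget. A hypothetical Go sketch of that pattern — not kubeadm's code; the URL and budget are taken from the log lines above:

	// Poll the kubelet healthz endpoint until it returns 200 OK or the
	// 4m0s budget is exhausted. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		const url = "http://127.0.0.1:10248/healthz"
		for {
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			resp, err := http.DefaultClient.Do(req)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for kubelet healthz")
				return
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
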
	I0927 17:40:37.839703  300155 cni.go:84] Creating CNI manager for ""
	I0927 17:40:37.839710  300155 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 17:40:37.841821  300155 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 17:40:37.843865  300155 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 17:40:37.847676  300155 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 17:40:37.847698  300155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 17:40:37.866135  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 17:40:38.146729  300155 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 17:40:38.146903  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:38.146990  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-583947 minikube.k8s.io/updated_at=2024_09_27T17_40_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=addons-583947 minikube.k8s.io/primary=true
	I0927 17:40:38.286715  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:38.286792  300155 ops.go:34] apiserver oom_adj: -16
	I0927 17:40:38.786911  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:39.287455  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:39.787717  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:40.287646  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:40.787539  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:41.286846  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:41.787184  300155 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:40:41.887917  300155 kubeadm.go:1113] duration metric: took 3.741096118s to wait for elevateKubeSystemPrivileges
	I0927 17:40:41.887996  300155 kubeadm.go:394] duration metric: took 22.147442286s to StartCluster
	I0927 17:40:41.888020  300155 settings.go:142] acquiring lock: {Name:mk6311c862b19a3d49ef46b1e763e636e4ddd1db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:41.888182  300155 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:40:41.888817  300155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/kubeconfig: {Name:mk3cffd40ec049ac1050f606c0f198b3abfa6caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:40:41.889117  300155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 17:40:41.889168  300155 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 17:40:41.889429  300155 config.go:182] Loaded profile config "addons-583947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:40:41.889465  300155 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 17:40:41.889539  300155 addons.go:69] Setting yakd=true in profile "addons-583947"
	I0927 17:40:41.889554  300155 addons.go:234] Setting addon yakd=true in "addons-583947"
	I0927 17:40:41.889579  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.890088  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.890257  300155 addons.go:69] Setting inspektor-gadget=true in profile "addons-583947"
	I0927 17:40:41.890275  300155 addons.go:234] Setting addon inspektor-gadget=true in "addons-583947"
	I0927 17:40:41.890297  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.890698  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.891192  300155 addons.go:69] Setting cloud-spanner=true in profile "addons-583947"
	I0927 17:40:41.891274  300155 addons.go:234] Setting addon cloud-spanner=true in "addons-583947"
	I0927 17:40:41.891391  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.892164  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.893127  300155 addons.go:69] Setting metrics-server=true in profile "addons-583947"
	I0927 17:40:41.893208  300155 addons.go:234] Setting addon metrics-server=true in "addons-583947"
	I0927 17:40:41.893782  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.894391  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.893607  300155 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-583947"
	I0927 17:40:41.895531  300155 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-583947"
	I0927 17:40:41.893620  300155 addons.go:69] Setting registry=true in profile "addons-583947"
	I0927 17:40:41.893625  300155 addons.go:69] Setting storage-provisioner=true in profile "addons-583947"
	I0927 17:40:41.893629  300155 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-583947"
	I0927 17:40:41.893632  300155 addons.go:69] Setting volcano=true in profile "addons-583947"
	I0927 17:40:41.893636  300155 addons.go:69] Setting volumesnapshots=true in profile "addons-583947"
	I0927 17:40:41.893696  300155 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-583947"
	I0927 17:40:41.893705  300155 addons.go:69] Setting default-storageclass=true in profile "addons-583947"
	I0927 17:40:41.893710  300155 addons.go:69] Setting gcp-auth=true in profile "addons-583947"
	I0927 17:40:41.893714  300155 addons.go:69] Setting ingress=true in profile "addons-583947"
	I0927 17:40:41.893718  300155 addons.go:69] Setting ingress-dns=true in profile "addons-583947"
	I0927 17:40:41.893735  300155 out.go:177] * Verifying Kubernetes components...
	I0927 17:40:41.899750  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.901061  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899776  300155 addons.go:234] Setting addon registry=true in "addons-583947"
	I0927 17:40:41.909504  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.910152  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899784  300155 addons.go:234] Setting addon storage-provisioner=true in "addons-583947"
	I0927 17:40:41.911110  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.911830  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.916049  300155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:40:41.899793  300155 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-583947"
	I0927 17:40:41.929123  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899805  300155 addons.go:234] Setting addon volcano=true in "addons-583947"
	I0927 17:40:41.948449  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.949076  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899814  300155 addons.go:234] Setting addon volumesnapshots=true in "addons-583947"
	I0927 17:40:41.975739  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.976238  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.992707  300155 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 17:40:41.992989  300155 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 17:40:41.899843  300155 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-583947"
	I0927 17:40:41.993527  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:41.995514  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899850  300155 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-583947"
	I0927 17:40:42.006697  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:42.015204  300155 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 17:40:42.015274  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 17:40:42.015362  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:41.899862  300155 mustload.go:65] Loading cluster: addons-583947
	I0927 17:40:42.024760  300155 config.go:182] Loaded profile config "addons-583947": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:40:42.025060  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:41.899868  300155 addons.go:234] Setting addon ingress=true in "addons-583947"
	I0927 17:40:42.025230  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:42.028141  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:42.037395  300155 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 17:40:42.037425  300155 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 17:40:42.037507  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:41.899874  300155 addons.go:234] Setting addon ingress-dns=true in "addons-583947"
	I0927 17:40:42.045489  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:42.084138  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:42.135736  300155 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 17:40:42.137622  300155 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 17:40:42.137655  300155 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 17:40:42.137770  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.158229  300155 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 17:40:42.160338  300155 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-583947"
	I0927 17:40:42.160384  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:42.160874  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:42.196898  300155 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 17:40:42.203268  300155 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 17:40:42.203295  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 17:40:42.203377  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.227987  300155 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 17:40:42.231760  300155 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0927 17:40:42.236252  300155 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 17:40:42.236330  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 17:40:42.236447  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.256845  300155 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 17:40:42.257112  300155 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 17:40:42.232497  300155 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 17:40:42.257376  300155 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 17:40:42.257489  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.271531  300155 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:40:42.271560  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 17:40:42.271761  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.302676  300155 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 17:40:42.303632  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 17:40:42.306482  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 17:40:42.315053  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 17:40:42.315313  300155 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 17:40:42.315653  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.316126  300155 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 17:40:42.323058  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 17:40:42.325145  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 17:40:42.329445  300155 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 17:40:42.329531  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 17:40:42.329668  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.338609  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 17:40:42.340957  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 17:40:42.343555  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 17:40:42.351458  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 17:40:42.354672  300155 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 17:40:42.357175  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 17:40:42.357276  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 17:40:42.357424  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.360860  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
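The cli_runner/sshutil pair above is how the SSH endpoint gets resolved: Docker is asked which host port it bound to the container's 22/tcp, and the client then dials 127.0.0.1 on that port (33138 here). A minimal Go sketch of the same lookup, assuming only a local docker CLI and reusing the template and profile name from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Raw string so the inner quotes around "22/tcp" survive.
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-583947").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
    }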
	I0927 17:40:42.384261  300155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 17:40:42.389096  300155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:40:42.405630  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.407506  300155 addons.go:234] Setting addon default-storageclass=true in "addons-583947"
	I0927 17:40:42.407557  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:42.407984  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:42.412288  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:42.436742  300155 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 17:40:42.439023  300155 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 17:40:42.440735  300155 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 17:40:42.441728  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.446680  300155 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 17:40:42.446707  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 17:40:42.446777  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.451339  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.458249  300155 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 17:40:42.464841  300155 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 17:40:42.464870  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 17:40:42.464936  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.481604  300155 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 17:40:42.483382  300155 out.go:177]   - Using image docker.io/busybox:stable
	I0927 17:40:42.486477  300155 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 17:40:42.486501  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 17:40:42.486569  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.495500  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.533992  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.561486  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.581313  300155 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 17:40:42.581333  300155 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 17:40:42.581394  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:42.583471  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.590956  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.601878  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.604973  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.611691  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:42.625724  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	W0927 17:40:42.646174  300155 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 17:40:42.646210  300155 retry.go:31] will retry after 275.607841ms: ssh: handshake failed: EOF
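The handshake EOF above is treated as transient: retry.go schedules another attempt after a short delay instead of failing the addon setup. A generic dial-with-backoff sketch of that pattern; the address is taken from the log, but the attempt count and delays are illustrative, not minikube's actual policy:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry retries a TCP dial with exponential backoff between attempts.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
    	backoff := 250 * time.Millisecond
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		fmt.Printf("will retry after %v: %v\n", backoff, err)
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:33138", 4)
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    }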
	I0927 17:40:42.649455  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:43.166263  300155 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 17:40:43.166295  300155 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 17:40:43.246115  300155 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 17:40:43.246142  300155 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 17:40:43.326648  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 17:40:43.365860  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 17:40:43.411695  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 17:40:43.416979  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 17:40:43.494635  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 17:40:43.494659  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 17:40:43.509827  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 17:40:43.540152  300155 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 17:40:43.540225  300155 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 17:40:43.542580  300155 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 17:40:43.542659  300155 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 17:40:43.546377  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:40:43.550358  300155 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 17:40:43.550429  300155 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 17:40:43.568255  300155 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 17:40:43.568328  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 17:40:43.597806  300155 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 17:40:43.597877  300155 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 17:40:43.656971  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 17:40:43.657044  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 17:40:43.696642  300155 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 17:40:43.696712  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 17:40:43.733651  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 17:40:43.777341  300155 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 17:40:43.777420  300155 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 17:40:43.811828  300155 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 17:40:43.811900  300155 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 17:40:43.879895  300155 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 17:40:43.879966  300155 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 17:40:43.886148  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 17:40:43.886222  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 17:40:43.893345  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 17:40:43.943277  300155 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 17:40:43.943352  300155 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 17:40:44.039460  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 17:40:44.163766  300155 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 17:40:44.163842  300155 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 17:40:44.164996  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 17:40:44.165045  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 17:40:44.167524  300155 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 17:40:44.167569  300155 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 17:40:44.180179  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 17:40:44.180257  300155 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 17:40:44.232151  300155 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 17:40:44.232229  300155 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 17:40:44.453510  300155 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 17:40:44.453593  300155 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 17:40:44.471725  300155 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 17:40:44.471803  300155 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 17:40:44.535664  300155 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 17:40:44.535736  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 17:40:44.607132  300155 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 17:40:44.607205  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 17:40:44.635699  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 17:40:44.788952  300155 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0927 17:40:44.789028  300155 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0927 17:40:44.826395  300155 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.43725742s)
	I0927 17:40:44.827325  300155 node_ready.go:35] waiting up to 6m0s for node "addons-583947" to be "Ready" ...
	I0927 17:40:44.827586  300155 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.443300402s)
	I0927 17:40:44.827610  300155 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
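The injected record means pods resolving host.minikube.internal get the gateway address 192.168.49.1. The pipeline above does the edit with sed over `kubectl get -o yaml` plus `kubectl replace`; the hosts-block half of that edit can also be sketched with client-go (kubeconfig path taken from the log, error handling trimmed, Corefile indentation approximate):

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Insert the hosts block immediately before the forward directive,
    	// which is what the log's sed expression does on the YAML rendering.
    	const anchor = "forward . /etc/resolv.conf"
    	hosts := "hosts {\n       192.168.49.1 host.minikube.internal\n       fallthrough\n    }\n    "
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], anchor, hosts+anchor, 1)

    	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("host.minikube.internal record injected")
    }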
	I0927 17:40:44.828589  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.501911327s)
	I0927 17:40:44.832818  300155 node_ready.go:49] node "addons-583947" has status "Ready":"True"
	I0927 17:40:44.832845  300155 node_ready.go:38] duration metric: took 5.494446ms for node "addons-583947" to be "Ready" ...
	I0927 17:40:44.832889  300155 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:40:44.851722  300155 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace to be "Ready" ...
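pod_ready.go's wait is a plain list-and-check poll against the pod's Ready condition. With a recent client-go, the equivalent loop can be written with wait.PollUntilContextTimeout; the namespace and 6m0s budget mirror the log, while the helper itself is an illustrative sketch, not minikube's implementation:

    package waitpods

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until some pod matching selector reports Ready=True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil {
    				return false, nil // treat API hiccups as retryable, keep polling
    			}
    			for _, p := range pods.Items {
    				for _, c := range p.Status.Conditions {
    					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    						return true, nil
    					}
    				}
    			}
    			return false, nil
    		})
    }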
	I0927 17:40:44.880602  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 17:40:44.900477  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 17:40:44.900506  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 17:40:45.020658  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 17:40:45.332688  300155 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-583947" context rescaled to 1 replicas
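The rescale kapi.go reports can be expressed through the deployment's scale subresource; a minimal sketch, with clientset construction omitted:

    package rescale

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS pins the coredns deployment to a single replica via the
    // scale subresource, matching the rescale the kapi.go line reports.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }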
	I0927 17:40:45.357060  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 17:40:45.357090  300155 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 17:40:45.399697  300155 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 17:40:45.399806  300155 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 17:40:45.612737  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 17:40:45.612765  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 17:40:45.667307  300155 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 17:40:45.667334  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0927 17:40:45.888194  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 17:40:45.904896  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 17:40:45.904923  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 17:40:46.108255  300155 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 17:40:46.108283  300155 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 17:40:46.395817  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 17:40:46.891567  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:47.513579  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.147678553s)
	I0927 17:40:47.513790  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.102067445s)
	I0927 17:40:47.513829  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.096827509s)
	I0927 17:40:47.513856  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.003960126s)
	I0927 17:40:47.513972  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.967528766s)
	W0927 17:40:47.537524  300155 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
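That warning is the API server's optimistic-concurrency conflict: something else updated the local-path StorageClass between the read and the write, so the write carried a stale resourceVersion and was rejected. The usual fix is to re-read and retry, e.g. with client-go's RetryOnConflict helper; a sketch, where the annotation key is the standard default-class marker rather than anything taken from this log:

    package storageclass

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation, re-reading the object
    // on every attempt so each Update carries the latest resourceVersion.
    func markNonDefault(cs kubernetes.Interface, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err // retried only when this is a Conflict error
    	})
    }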
	I0927 17:40:49.359020  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:49.623702  300155 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 17:40:49.623798  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:49.649008  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:50.382229  300155 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 17:40:50.422180  300155 addons.go:234] Setting addon gcp-auth=true in "addons-583947"
	I0927 17:40:50.422237  300155 host.go:66] Checking if "addons-583947" exists ...
	I0927 17:40:50.422695  300155 cli_runner.go:164] Run: docker container inspect addons-583947 --format={{.State.Status}}
	I0927 17:40:50.447135  300155 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 17:40:50.447190  300155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-583947
	I0927 17:40:50.476308  300155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/addons-583947/id_rsa Username:docker}
	I0927 17:40:51.860031  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:53.405311  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.511902405s)
	I0927 17:40:53.405451  300155 addons.go:475] Verifying addon ingress=true in "addons-583947"
	I0927 17:40:53.405469  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.769747109s)
	I0927 17:40:53.405493  300155 addons.go:475] Verifying addon metrics-server=true in "addons-583947"
	I0927 17:40:53.405534  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.524906227s)
	I0927 17:40:53.405744  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.384952818s)
	W0927 17:40:53.405781  300155 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 17:40:53.405813  300155 retry.go:31] will retry after 370.234658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I0927 17:40:53.405375  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.365842455s)
	I0927 17:40:53.405877  300155 addons.go:475] Verifying addon registry=true in "addons-583947"
	I0927 17:40:53.406107  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.517762326s)
	I0927 17:40:53.405331  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.671509135s)
	I0927 17:40:53.409269  300155 out.go:177] * Verifying ingress addon...
	I0927 17:40:53.409301  300155 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-583947 service yakd-dashboard -n yakd-dashboard
	
	I0927 17:40:53.410647  300155 out.go:177] * Verifying registry addon...
	I0927 17:40:53.412537  300155 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 17:40:53.413710  300155 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 17:40:53.536317  300155 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 17:40:53.536349  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:53.537346  300155 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 17:40:53.537365  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:53.777060  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
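The earlier snapshot-stack failure was an ordering problem, not a broken manifest: the VolumeSnapshotClass CR sat in the same apply batch as the CRDs defining its kind, and the REST mapper has no mapping for snapshot.storage.k8s.io/v1 until those CRDs are established, hence "ensure CRDs are installed first". The retried apply above succeeds because the CRDs registered in the meantime. Waiting for the Established condition before applying CRs avoids the retry; a sketch against the apiextensions client (function name illustrative):

    package crds

    import (
    	"context"
    	"time"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // waitCRDEstablished blocks until the named CRD reports Established=True,
    // after which CRs of its kind can be applied without a mapping error.
    func waitCRDEstablished(ctx context.Context, c apiextensionsclient.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil
    			}
    			for _, cond := range crd.Status.Conditions {
    				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }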
	I0927 17:40:53.895661  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:53.925407  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:53.926005  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:54.213965  300155 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.766801084s)
	I0927 17:40:54.214100  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.818076256s)
	I0927 17:40:54.214132  300155 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-583947"
	I0927 17:40:54.216673  300155 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 17:40:54.216748  300155 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 17:40:54.219222  300155 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 17:40:54.220170  300155 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 17:40:54.221644  300155 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 17:40:54.221666  300155 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 17:40:54.226179  300155 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 17:40:54.226211  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:54.284642  300155 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 17:40:54.284670  300155 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 17:40:54.313496  300155 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 17:40:54.313524  300155 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 17:40:54.393098  300155 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 17:40:54.420266  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:54.421205  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:54.746250  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:54.919233  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:54.919774  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:55.226517  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:55.392898  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.615788235s)
	I0927 17:40:55.419889  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:55.421442  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:55.675954  300155 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.282815255s)
	I0927 17:40:55.679459  300155 addons.go:475] Verifying addon gcp-auth=true in "addons-583947"
	I0927 17:40:55.682514  300155 out.go:177] * Verifying gcp-auth addon...
	I0927 17:40:55.685622  300155 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 17:40:55.691291  300155 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 17:40:55.725735  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:55.918957  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:55.919605  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:56.226592  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:56.364555  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:56.419729  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:56.421200  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:56.726235  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:56.920838  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:56.922144  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:57.300802  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:57.418589  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:57.419055  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:57.725221  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:57.918897  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:57.920413  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:58.291599  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:58.417787  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:58.418803  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:58.731880  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:58.866363  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:40:58.920480  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:58.924501  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:59.225292  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:59.417689  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:40:59.418281  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:59.726262  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:40:59.917343  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:40:59.918751  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:00.292030  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:00.427872  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:00.430402  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:00.725019  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:00.918461  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:00.919332  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:01.225978  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:01.358409  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:01.418996  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:01.419257  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:01.725069  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:01.917759  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:01.918207  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:02.231562  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:02.417571  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:02.418565  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:02.725883  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:02.916829  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:02.918276  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:03.225373  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:03.358934  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:03.417841  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:03.418416  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:03.725053  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:03.917924  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:03.918808  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:04.225657  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:04.416911  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:04.418841  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:04.725732  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:04.917301  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:04.918140  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:05.225569  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:05.419498  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:05.420364  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:05.725117  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:05.858573  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:05.917715  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:05.918065  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:06.225336  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:06.417599  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:06.418414  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:06.725713  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:06.917357  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:06.917739  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:07.225000  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:07.417711  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:07.417906  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:07.725560  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:07.916980  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:07.918601  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:08.226187  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:08.358580  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:08.418412  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:08.419349  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:08.725622  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:08.917143  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:08.918284  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:09.229771  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:09.417557  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:09.419108  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:09.725635  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:09.917305  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:09.919152  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:10.232498  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:10.418880  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:10.420259  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:10.725949  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:10.858572  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:10.918053  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:10.919682  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:11.228688  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:11.418488  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:11.419024  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:11.729303  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:11.918493  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:11.919615  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:12.225229  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:12.417753  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:12.418832  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:12.725588  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:12.858952  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:12.917808  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:12.919787  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:13.224713  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:13.417653  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:13.419289  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:13.725697  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:13.917347  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:13.917909  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:14.234724  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:14.418036  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:14.419285  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:14.725183  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:14.918456  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:14.919047  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:15.225750  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:15.357851  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:15.416922  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:15.419032  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:15.725049  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:15.917493  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:15.919164  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:16.226649  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:16.418911  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:16.419158  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:16.724724  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:16.917672  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:16.918670  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:17.226069  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:17.358175  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:17.417786  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:17.418302  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:17.725034  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:17.917708  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:17.918935  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:18.225622  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:18.418691  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:18.419384  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:18.725587  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:18.918289  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:18.918336  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:19.228844  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:19.358529  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:19.418055  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:19.418863  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:19.724692  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:19.917732  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:19.918674  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:20.226310  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:20.417751  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:20.418826  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:20.725385  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:20.918194  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:20.919050  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:21.225500  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:21.360013  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:21.417662  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:21.418564  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:21.725141  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:21.917394  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:21.919994  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:22.231175  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:22.420352  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:22.421074  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:22.725122  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:22.917793  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:22.918733  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:23.291430  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:23.418168  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:23.418942  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:23.725441  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:23.858353  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:23.918060  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:23.918941  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:24.227643  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:24.416835  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:24.417914  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:24.725375  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:24.917742  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:24.919481  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:25.225924  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:25.417908  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:25.419266  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:25.726464  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:25.918745  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:25.919657  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:26.293728  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:26.358299  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:26.418550  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:26.418868  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:26.803687  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:26.918156  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:26.920008  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:27.225191  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:27.418988  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:27.419944  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:27.725183  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:27.917827  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:27.919640  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:28.225907  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:28.359498  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:28.419137  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:28.419832  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:28.724999  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:28.917910  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:28.919219  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:29.226553  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:29.419984  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:29.421566  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:29.724984  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:29.919423  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:29.921280  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:30.228655  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:30.365521  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:30.421914  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:30.422897  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:30.795288  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:30.917774  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:30.919039  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:31.225616  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:31.419023  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:31.420872  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:31.725197  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:31.918209  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:31.918878  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:32.225780  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:32.417578  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:32.419635  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:32.725966  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:32.858383  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:32.917015  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:32.917878  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:33.226268  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:33.419684  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:33.420949  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:33.725632  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:33.917903  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:33.918432  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:34.225350  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:34.419118  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:34.421464  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:34.794432  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:34.858688  300155 pod_ready.go:103] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"False"
	I0927 17:41:34.918234  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:34.919468  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:35.225519  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:35.418919  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:35.420214  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:35.732670  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:35.857604  300155 pod_ready.go:93] pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:35.857632  300155 pod_ready.go:82] duration metric: took 51.005868502s for pod "coredns-7c65d6cfc9-7qtwt" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.857644  300155 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ljzsq" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.859470  300155 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-ljzsq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ljzsq" not found
	I0927 17:41:35.859495  300155 pod_ready.go:82] duration metric: took 1.844583ms for pod "coredns-7c65d6cfc9-ljzsq" in "kube-system" namespace to be "Ready" ...
	E0927 17:41:35.859507  300155 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-ljzsq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ljzsq" not found
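	The "skipping!" lines above show the wait treating a deleted pod as terminal rather than retrying it for the full 6m0s. A minimal client-go sketch of that decision, assuming a recent client-go and a clientset built elsewhere (checkPodReady is a hypothetical helper, not minikube's code):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // checkPodReady is a wait condition: it reports done=true when the pod's
	    // Ready condition is True, and turns NotFound into a terminal error so a
	    // deleted replica is skipped instead of polled until the timeout.
	    func checkPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	        if apierrors.IsNotFound(err) {
	            return true, fmt.Errorf("pod %q not found (skipping)", name)
	        }
	        if err != nil {
	            return false, nil // transient API error: keep polling
	        }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }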
	I0927 17:41:35.859514  300155 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.864410  300155 pod_ready.go:93] pod "etcd-addons-583947" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:35.864437  300155 pod_ready.go:82] duration metric: took 4.915486ms for pod "etcd-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.864452  300155 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.869341  300155 pod_ready.go:93] pod "kube-apiserver-addons-583947" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:35.869366  300155 pod_ready.go:82] duration metric: took 4.906748ms for pod "kube-apiserver-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.869378  300155 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.874569  300155 pod_ready.go:93] pod "kube-controller-manager-addons-583947" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:35.874594  300155 pod_ready.go:82] duration metric: took 5.208224ms for pod "kube-controller-manager-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.874609  300155 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xp2qc" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:35.917369  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:35.918532  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:36.056210  300155 pod_ready.go:93] pod "kube-proxy-xp2qc" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:36.056239  300155 pod_ready.go:82] duration metric: took 181.621466ms for pod "kube-proxy-xp2qc" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:36.056252  300155 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:36.225501  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:36.418162  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:36.419563  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:36.456914  300155 pod_ready.go:93] pod "kube-scheduler-addons-583947" in "kube-system" namespace has status "Ready":"True"
	I0927 17:41:36.456940  300155 pod_ready.go:82] duration metric: took 400.679624ms for pod "kube-scheduler-addons-583947" in "kube-system" namespace to be "Ready" ...
	I0927 17:41:36.456951  300155 pod_ready.go:39] duration metric: took 51.624041806s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
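	Each of those per-pod waits is a condition like the one sketched above, driven by a poll loop. With apimachinery's wait package it has roughly this shape (same package as the sketch above, plus these imports; the interval and timeout are illustrative, not minikube's exact values):

	    import (
	        "time"

	        "k8s.io/apimachinery/pkg/util/wait"
	    )

	    // waitForPodReady polls checkPodReady (sketched above) every 2s until the
	    // pod is Ready, the condition returns a terminal error, or 6m0s elapses.
	    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                return checkPodReady(ctx, cs, ns, name)
	            })
	    }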
	I0927 17:41:36.456976  300155 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:41:36.457055  300155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:41:36.473696  300155 api_server.go:72] duration metric: took 54.584492192s to wait for apiserver process to appear ...
	I0927 17:41:36.473762  300155 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:41:36.473799  300155 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 17:41:36.482990  300155 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 17:41:36.485665  300155 api_server.go:141] control plane version: v1.31.1
	I0927 17:41:36.485746  300155 api_server.go:131] duration metric: took 11.96242ms to wait for apiserver health ...
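	The healthz probe above is a raw GET against the apiserver; with the same clientset it can be reproduced like this (the "ok" body is what a healthy control plane returns, as logged):

	    // rawHealthz GETs /healthz on the apiserver the clientset points at and
	    // returns the body; a healthy control plane answers 200 with "ok".
	    func rawHealthz(ctx context.Context, cs kubernetes.Interface) (string, error) {
	        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	        if err != nil {
	            return "", err
	        }
	        return string(body), nil
	    }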
	I0927 17:41:36.485771  300155 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:41:36.663054  300155 system_pods.go:59] 18 kube-system pods found
	I0927 17:41:36.663143  300155 system_pods.go:61] "coredns-7c65d6cfc9-7qtwt" [66a7a9f2-5f87-4cd8-a043-824722429797] Running
	I0927 17:41:36.663165  300155 system_pods.go:61] "csi-hostpath-attacher-0" [4fce2bbb-2b89-4b3b-9402-b505f6699450] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 17:41:36.663175  300155 system_pods.go:61] "csi-hostpath-resizer-0" [e8ac2b55-0cab-462f-aee6-b1bc4a3148a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 17:41:36.663185  300155 system_pods.go:61] "csi-hostpathplugin-sxlw6" [f13f701e-4b13-4973-978c-e7d79e6f1d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 17:41:36.663190  300155 system_pods.go:61] "etcd-addons-583947" [f9dcf636-2692-42b2-8128-67dfec8422d9] Running
	I0927 17:41:36.663212  300155 system_pods.go:61] "kindnet-9m7z7" [7f807db9-6972-4a6a-aade-69ee8405fbd3] Running
	I0927 17:41:36.663227  300155 system_pods.go:61] "kube-apiserver-addons-583947" [f630e865-4ac6-4e44-9fb2-1ae0f50e3ef3] Running
	I0927 17:41:36.663231  300155 system_pods.go:61] "kube-controller-manager-addons-583947" [961b771a-bdbe-4db1-827b-32512adde3dc] Running
	I0927 17:41:36.663236  300155 system_pods.go:61] "kube-ingress-dns-minikube" [7325121b-061a-45ee-98e4-7c3bb953fe0a] Running
	I0927 17:41:36.663253  300155 system_pods.go:61] "kube-proxy-xp2qc" [3e14d115-e325-43cf-906b-233ef212e08e] Running
	I0927 17:41:36.663266  300155 system_pods.go:61] "kube-scheduler-addons-583947" [5ee31ad4-d0d6-4ceb-b4b3-b1584ecbe1bb] Running
	I0927 17:41:36.663272  300155 system_pods.go:61] "metrics-server-84c5f94fbc-h78zb" [866e445c-c734-4c72-b8c1-55af9c7258fd] Running
	I0927 17:41:36.663279  300155 system_pods.go:61] "nvidia-device-plugin-daemonset-9r5dq" [c3c31223-d780-4d40-836d-0fdcf06acfdf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 17:41:36.663285  300155 system_pods.go:61] "registry-66c9cd494c-zg7pk" [c6e96250-7c38-480c-842d-2d5612850d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 17:41:36.663291  300155 system_pods.go:61] "registry-proxy-w654q" [5450ef2a-71a8-4be2-bd6d-c91f93d716b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 17:41:36.663299  300155 system_pods.go:61] "snapshot-controller-56fcc65765-cr5dc" [9b4b6f4a-c17e-41f9-938e-c13d5ae17c6c] Running
	I0927 17:41:36.663303  300155 system_pods.go:61] "snapshot-controller-56fcc65765-v4w7f" [81ecf2b1-f403-4c50-b5a5-c6039a6483f8] Running
	I0927 17:41:36.663309  300155 system_pods.go:61] "storage-provisioner" [eb043422-2620-4d4f-a305-0377e599ab4d] Running
	I0927 17:41:36.663344  300155 system_pods.go:74] duration metric: took 177.542591ms to wait for pod list to return data ...
	I0927 17:41:36.663361  300155 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:41:36.799404  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:36.855053  300155 default_sa.go:45] found service account: "default"
	I0927 17:41:36.855081  300155 default_sa.go:55] duration metric: took 191.712739ms for default service account to be created ...
	I0927 17:41:36.855091  300155 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:41:36.916867  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:36.918442  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:37.064965  300155 system_pods.go:86] 18 kube-system pods found
	I0927 17:41:37.065008  300155 system_pods.go:89] "coredns-7c65d6cfc9-7qtwt" [66a7a9f2-5f87-4cd8-a043-824722429797] Running
	I0927 17:41:37.065022  300155 system_pods.go:89] "csi-hostpath-attacher-0" [4fce2bbb-2b89-4b3b-9402-b505f6699450] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 17:41:37.065032  300155 system_pods.go:89] "csi-hostpath-resizer-0" [e8ac2b55-0cab-462f-aee6-b1bc4a3148a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 17:41:37.065091  300155 system_pods.go:89] "csi-hostpathplugin-sxlw6" [f13f701e-4b13-4973-978c-e7d79e6f1d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 17:41:37.065098  300155 system_pods.go:89] "etcd-addons-583947" [f9dcf636-2692-42b2-8128-67dfec8422d9] Running
	I0927 17:41:37.065104  300155 system_pods.go:89] "kindnet-9m7z7" [7f807db9-6972-4a6a-aade-69ee8405fbd3] Running
	I0927 17:41:37.065113  300155 system_pods.go:89] "kube-apiserver-addons-583947" [f630e865-4ac6-4e44-9fb2-1ae0f50e3ef3] Running
	I0927 17:41:37.065118  300155 system_pods.go:89] "kube-controller-manager-addons-583947" [961b771a-bdbe-4db1-827b-32512adde3dc] Running
	I0927 17:41:37.065135  300155 system_pods.go:89] "kube-ingress-dns-minikube" [7325121b-061a-45ee-98e4-7c3bb953fe0a] Running
	I0927 17:41:37.065140  300155 system_pods.go:89] "kube-proxy-xp2qc" [3e14d115-e325-43cf-906b-233ef212e08e] Running
	I0927 17:41:37.065144  300155 system_pods.go:89] "kube-scheduler-addons-583947" [5ee31ad4-d0d6-4ceb-b4b3-b1584ecbe1bb] Running
	I0927 17:41:37.065149  300155 system_pods.go:89] "metrics-server-84c5f94fbc-h78zb" [866e445c-c734-4c72-b8c1-55af9c7258fd] Running
	I0927 17:41:37.065158  300155 system_pods.go:89] "nvidia-device-plugin-daemonset-9r5dq" [c3c31223-d780-4d40-836d-0fdcf06acfdf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 17:41:37.065164  300155 system_pods.go:89] "registry-66c9cd494c-zg7pk" [c6e96250-7c38-480c-842d-2d5612850d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 17:41:37.065170  300155 system_pods.go:89] "registry-proxy-w654q" [5450ef2a-71a8-4be2-bd6d-c91f93d716b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 17:41:37.065175  300155 system_pods.go:89] "snapshot-controller-56fcc65765-cr5dc" [9b4b6f4a-c17e-41f9-938e-c13d5ae17c6c] Running
	I0927 17:41:37.065180  300155 system_pods.go:89] "snapshot-controller-56fcc65765-v4w7f" [81ecf2b1-f403-4c50-b5a5-c6039a6483f8] Running
	I0927 17:41:37.065184  300155 system_pods.go:89] "storage-provisioner" [eb043422-2620-4d4f-a305-0377e599ab4d] Running
	I0927 17:41:37.065192  300155 system_pods.go:126] duration metric: took 210.094968ms to wait for k8s-apps to be running ...
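	The two 18-pod inventories above are plain namespace listings with a per-pod phase check; a compact sketch of the same counting (no extra imports beyond the first sketch):

	    // countRunning lists kube-system pods and reports how many are Running,
	    // mirroring the system_pods inventory above (18 found, some still Pending).
	    func countRunning(ctx context.Context, cs kubernetes.Interface) (running, total int, err error) {
	        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return 0, 0, err
	        }
	        for _, p := range pods.Items {
	            if p.Status.Phase == corev1.PodRunning {
	                running++
	            }
	        }
	        return running, len(pods.Items), nil
	    }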
	I0927 17:41:37.065201  300155 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:41:37.065309  300155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:41:37.079681  300155 system_svc.go:56] duration metric: took 14.467875ms WaitForService to wait for kubelet
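	The kubelet probe is only an exit-code check; minikube runs it over SSH inside the node, but the local equivalent is a one-liner (sketch using os/exec; the stray "service" token from the logged command is dropped here):

	    import "os/exec"

	    // kubeletActive mirrors `systemctl is-active --quiet kubelet`: a zero exit
	    // status means the unit is active, so Run() returning nil is "running".
	    func kubeletActive(ctx context.Context) bool {
	        return exec.CommandContext(ctx, "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	    }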
	I0927 17:41:37.079818  300155 kubeadm.go:582] duration metric: took 55.190617969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:41:37.079863  300155 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:41:37.225368  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:37.255828  300155 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 17:41:37.255864  300155 node_conditions.go:123] node cpu capacity is 2
	I0927 17:41:37.255879  300155 node_conditions.go:105] duration metric: took 175.985993ms to run NodePressure ...
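	The 203034800Ki and cpu=2 figures above come straight from node status; a sketch reading the same capacity fields (imports as in the first sketch):

	    // nodeCapacity prints each node's CPU and ephemeral-storage capacity, the
	    // fields node_conditions.go reports above for the single addons-583947 node.
	    func nodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	        }
	        return nil
	    }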
	I0927 17:41:37.255892  300155 start.go:241] waiting for startup goroutines ...
	I0927 17:41:37.418593  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:37.420249  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:37.725087  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:37.918371  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:37.919182  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:38.225528  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:38.417669  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:38.418519  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:38.725962  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:38.917349  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:38.919246  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:39.225627  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:39.417154  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:39.419723  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:39.727491  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:39.918503  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:39.919268  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:40.225641  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:40.418456  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:40.419931  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:40.725896  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:40.920572  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:40.921885  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:41.226008  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:41.418164  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:41.418664  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:41.725682  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:41.917970  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:41.919826  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:42.227184  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:42.418258  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:42.420593  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:42.790829  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:42.917013  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:42.917741  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:43.226077  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:43.416935  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:43.417718  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:43.791991  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:43.917254  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:43.919383  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:44.224990  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:44.420261  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:44.423087  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:44.726651  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:44.918498  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:44.919893  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:45.285847  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:45.421538  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:45.424037  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:45.726706  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:45.918540  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:45.919893  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:46.225473  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:46.418734  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:46.419695  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:46.795566  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:46.926897  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:46.927100  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:47.225370  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:47.417028  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:47.417816  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:47.725606  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:47.920951  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:47.921650  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:48.225763  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:48.416452  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:48.418577  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:48.726288  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:48.919455  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:48.920380  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:49.226113  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:49.422078  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:49.422353  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:49.725674  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:49.922122  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:49.922794  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:50.224768  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:50.417341  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:50.417996  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:50.725323  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:50.919608  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:50.920787  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:51.225643  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:51.418420  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:51.419138  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:51.727187  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:51.921365  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:51.923254  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:52.225578  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:52.422568  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:52.423009  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:52.725318  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:52.919950  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:52.921017  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:53.225540  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:53.417572  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:53.417850  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:53.724903  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:53.917236  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:53.919166  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:54.225638  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:54.454364  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:54.455480  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:54.725492  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:54.919048  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:54.920096  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:55.225870  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:55.420650  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 17:41:55.422719  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:55.787721  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:55.918243  300155 kapi.go:107] duration metric: took 1m2.504529602s to wait for kubernetes.io/minikube-addons=registry ...
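	The registry wait that just finished (1m2.5s) is the kapi.go:96 pattern: list pods by label selector and poll until all are Running. A sketch of that shape, reusing the imports above (selector copied from the log; the 500ms interval matches the roughly half-second cadence visible in the csi-hostpath-driver timestamps):

	    // waitForLabel polls until at least one pod matches the selector and all
	    // matching pods are Running -- the shape of the kapi.go:96 waits above.
	    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	                if err != nil || len(pods.Items) == 0 {
	                    return false, nil // nothing listed yet: keep polling
	                }
	                for _, p := range pods.Items {
	                    if p.Status.Phase != corev1.PodRunning {
	                        return false, nil
	                    }
	                }
	                return true, nil
	            })
	    }

	For the wait that completed above, the call would be waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry").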
	I0927 17:41:55.919690  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:56.225952  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:56.417674  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:56.725641  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:56.916889  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:57.225433  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:57.417918  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:57.729139  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:57.917786  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:58.225702  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:58.417315  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:58.792903  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:58.917140  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:59.225714  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:59.416977  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:41:59.725134  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:41:59.917541  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:00.279072  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:00.421665  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:00.725671  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:00.917284  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:01.226213  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:01.417370  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:01.726756  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:01.917920  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:02.226416  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:02.418499  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:02.725944  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:02.918296  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:03.224903  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:03.417111  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:03.725402  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:03.916951  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:04.225618  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:04.417115  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:04.726070  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:04.921098  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:05.225100  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:05.417216  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:05.725073  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:05.920188  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:06.225758  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:06.417846  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:06.725038  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:06.917675  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:07.228213  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:07.421439  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:07.726708  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:07.918079  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:08.226501  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:08.417587  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:08.726082  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:08.917894  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:09.225848  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:09.417137  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:09.730323  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:09.917438  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:10.225948  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:10.417405  300155 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 17:42:10.791842  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:10.922885  300155 kapi.go:107] duration metric: took 1m17.51034636s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 17:42:11.225973  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:11.727569  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:12.236560  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:12.725506  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:13.225811  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:13.725308  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:14.224632  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:14.725627  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 17:42:15.225636  300155 kapi.go:107] duration metric: took 1m21.005460936s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 17:42:18.700591  300155 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 17:42:18.700612  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:19.190980  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:19.690478  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:20.189445  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:20.689697  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:21.189862  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:21.690083  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:22.189388  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:22.689182  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:23.189945  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:23.689740  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:24.189667  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:24.688814  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:25.190182  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:25.690059  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:26.189426  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:26.689510  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:27.196801  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:27.688977  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:28.189107  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:28.689715  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:29.189805  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:29.690158  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:30.196866  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:30.689615  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:31.189512  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:31.689116  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:32.189542  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:32.688504  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:33.189852  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:33.688693  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:34.189420  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:34.689089  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:35.190203  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:35.689457  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:36.189515  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:36.689495  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:37.197088  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:37.690295  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:38.189542  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:38.689198  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:39.190307  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:39.689493  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:40.190222  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:40.688972  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:41.190487  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:41.690373  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:42.190771  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:42.689372  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:43.189881  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:43.689233  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:44.189004  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:44.688858  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:45.191089  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:45.690420  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:46.189803  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:46.689953  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:47.198736  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:47.689359  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:48.190090  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:48.689805  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:49.190045  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:49.689155  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:50.189830  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:50.689819  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:51.190375  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:51.690162  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:52.189811  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:52.689522  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:53.191784  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:53.689576  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:54.188783  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:54.689499  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:55.189807  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:55.689799  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:56.190099  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:56.689775  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:57.190412  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:57.689359  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:58.190161  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:58.691684  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:59.191876  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:42:59.689712  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:00.215558  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:00.689176  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:01.189722  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:01.688878  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:02.190114  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:02.690431  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:03.189512  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:03.689654  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:04.190624  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:04.689370  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:05.188958  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:05.690198  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:06.189666  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:06.689312  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:07.199046  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:07.690441  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:08.189800  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:08.689413  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:09.188920  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:09.690057  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:10.189809  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:10.689862  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:11.189912  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:11.690401  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:12.189107  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:12.689475  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:13.189974  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:13.689810  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:14.188909  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:14.689533  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:15.189647  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:15.689738  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:16.189118  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:16.690142  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:17.196377  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:17.689293  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:18.190112  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:18.689406  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:19.189415  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:19.689545  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:20.189057  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:20.688890  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:21.189772  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:21.689913  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:22.190539  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:22.689848  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:23.190352  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:23.689931  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:24.189572  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:24.689332  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:25.189424  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:25.689951  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:26.189452  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:26.689539  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:27.194888  300155 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 17:43:27.690229  300155 kapi.go:107] duration metric: took 2m32.004632097s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 17:43:27.692332  300155 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-583947 cluster.
	I0927 17:43:27.694370  300155 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 17:43:27.695797  300155 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 17:43:27.697678  300155 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, volcano, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0927 17:43:27.699361  300155 addons.go:510] duration metric: took 2m45.809892522s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget volcano yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0927 17:43:27.699406  300155 start.go:246] waiting for cluster config update ...
	I0927 17:43:27.699431  300155 start.go:255] writing updated cluster config ...
	I0927 17:43:27.699734  300155 ssh_runner.go:195] Run: rm -f paused
	I0927 17:43:28.050360  300155 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 17:43:28.052426  300155 out.go:177] * Done! kubectl is now configured to use "addons-583947" cluster and "default" namespace by default
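
Editor's note: the kapi.go:96 / kapi.go:107 pairs above are minikube's generic wait loop — poll a label selector on a short interval, log the current phase, and emit a duration metric once the pod reports Running. A minimal stdlib-only sketch of that pattern follows; the stub checkPod, the selector, and the ~500ms interval are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"
)

// checkPod stands in for the real API lookup (minikube lists pods by label
// selector via client-go); this hypothetical stub always reports Pending,
// mirroring the "current state: Pending: [<nil>]" lines above.
func checkPod(ctx context.Context, selector string) (string, error) {
	return "Pending", nil
}

// waitForPod polls the selector on a fixed interval, logging each
// intermediate state, and emits a duration metric once the pod is Running.
func waitForPod(ctx context.Context, selector string, interval time.Duration) error {
	start := time.Now()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		phase, err := checkPod(ctx, selector)
		if err == nil && phase == "Running" {
			log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
			return nil
		}
		log.Printf("waiting for pod %q, current state: %s: [%v]", selector, phase, err)
		select {
		case <-ctx.Done():
			return fmt.Errorf("%s: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// Short timeout for the demo; the real waits above run for minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	if err := waitForPod(ctx, "kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond); err != nil {
		log.Fatal(err)
	}
}
```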
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e9149244f669d       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   d95c1061cc9c2       gcp-auth-89d5ffd79-8frvg
	3920cd3bfb01e       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	f039e97a8d860       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	0eff59d19e493       1a9605c872c1d       4 minutes ago       Running             admission                                0                   6b4c92e4b237e       volcano-admission-5874dfdd79-dvr69
	20b31db3cb728       289a818c8d9c5       4 minutes ago       Running             controller                               0                   314b1499af5ea       ingress-nginx-controller-bc57996ff-4bppd
	2001754b57a08       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	0c92a07ea9d61       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   47d6f89710a64       nvidia-device-plugin-daemonset-9r5dq
	db58bd763252b       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   6a3ee53ba1978       csi-hostpath-resizer-0
	83054eac77679       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   9db0b4ba1f2df       volcano-scheduler-6c9778cbdf-mbvfg
	2c9501a8d7440       c9cf76bb104e1       4 minutes ago       Running             registry                                 0                   661ce41f33702       registry-66c9cd494c-zg7pk
	fb344b2bb44be       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   6de3b096da59d       volcano-controllers-789ffc5785-vx57k
	3b771c5cadc67       420193b27261a       4 minutes ago       Exited              patch                                    0                   5017dd1675a52       ingress-nginx-admission-patch-bk854
	cc1bdfbe203ab       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   3f683efad49eb       csi-hostpath-attacher-0
	9a724a849fa2e       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   09f69866b5dd2       local-path-provisioner-86d989889c-q4pxh
	22a8e262b7b11       77bdba588b953       4 minutes ago       Running             yakd                                     0                   71b18e1bc9552       yakd-dashboard-67d98fc6b-dm4sw
	e39098b2f0a73       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	3df1b61784410       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   e2259a900947a       registry-proxy-w654q
	67d095db5f9ea       420193b27261a       5 minutes ago       Exited              create                                   0                   19ebe13bdbdbd       ingress-nginx-admission-create-22b75
	3fbcdb12b6acc       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   3684987d45496       cloud-spanner-emulator-5b584cc74-979s9
	694e192499be1       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   707d457cbb6b1       coredns-7c65d6cfc9-7qtwt
	4334fcdc1edc2       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   f23af42761f2f       snapshot-controller-56fcc65765-v4w7f
	c82a1bd5a0a6b       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	860f5efe517fa       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   cad05acc4d58e       metrics-server-84c5f94fbc-h78zb
	6923522606e84       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   fdf5b25ff7882       snapshot-controller-56fcc65765-cr5dc
	1cd3236d17973       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   a6c04b71a7a0c       csi-hostpathplugin-sxlw6
	96a28c564e915       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   622d304682165       gadget-x67ht
	fa39ac1ec8d3f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   63738b95b573c       kube-ingress-dns-minikube
	696f8c7aa8afa       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   e2bebf504e46d       storage-provisioner
	4da7a7dbf34de       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   5e84700a7987e       kindnet-9m7z7
	d6f0f812dfc47       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   d4ef13d4c52d0       kube-proxy-xp2qc
	ba7f1a3252f5f       27e3830e14027       6 minutes ago       Running             etcd                                     0                   50cab87c2ccca       etcd-addons-583947
	b8914c3ce3501       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   0c037c10ef28d       kube-scheduler-addons-583947
	e8e585e0f5102       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   0ff92bb5e74c6       kube-controller-manager-addons-583947
	50a07df108621       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   04df27a85b21c       kube-apiserver-addons-583947
	
	
	==> containerd <==
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.239149555Z" level=info msg="TearDown network for sandbox \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\" successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.239189866Z" level=info msg="StopPodSandbox for \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\" returns successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.239896180Z" level=info msg="RemovePodSandbox for \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\""
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.240026369Z" level=info msg="Forcibly stopping sandbox \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\""
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.248374368Z" level=info msg="TearDown network for sandbox \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\" successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.254208697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.254339551Z" level=info msg="RemovePodSandbox \"d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45\" returns successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.255037315Z" level=info msg="StopPodSandbox for \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\""
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.262710441Z" level=info msg="TearDown network for sandbox \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\" successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.262749817Z" level=info msg="StopPodSandbox for \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\" returns successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.263223830Z" level=info msg="RemovePodSandbox for \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\""
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.263263756Z" level=info msg="Forcibly stopping sandbox \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\""
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.271214770Z" level=info msg="TearDown network for sandbox \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\" successfully"
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.276723318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 27 17:43:37 addons-583947 containerd[821]: time="2024-09-27T17:43:37.276867111Z" level=info msg="RemovePodSandbox \"c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244\" returns successfully"
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.281291592Z" level=info msg="RemoveContainer for \"0b8258051a47e7f0a17452d487888c4c4744a388f24d60a399a8c2d3e993bd63\""
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.287101988Z" level=info msg="RemoveContainer for \"0b8258051a47e7f0a17452d487888c4c4744a388f24d60a399a8c2d3e993bd63\" returns successfully"
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.289538452Z" level=info msg="StopPodSandbox for \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\""
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.297497396Z" level=info msg="TearDown network for sandbox \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\" successfully"
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.297543689Z" level=info msg="StopPodSandbox for \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\" returns successfully"
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.298151502Z" level=info msg="RemovePodSandbox for \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\""
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.298297741Z" level=info msg="Forcibly stopping sandbox \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\""
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.308299192Z" level=info msg="TearDown network for sandbox \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\" successfully"
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.314939184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 27 17:44:37 addons-583947 containerd[821]: time="2024-09-27T17:44:37.315061742Z" level=info msg="RemovePodSandbox \"1917d3d19123c64ce3280c2cd772a896d88052cf6ac3ef21775ad98c876625b3\" returns successfully"
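
Editor's note: the StopPodSandbox / RemovePodSandbox sequences above are containerd's routine cleanup of exited sandboxes, and the interleaved "Failed to get podSandbox status ... not found" warnings appear during the forcible stop of sandboxes whose network was already torn down, as the surrounding lines show. When triaging longer runs of these logs, surfacing only warning-level lines helps; a small filter sketch, assuming the logfmt-style layout shown above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the logfmt-style containerd lines above, capturing the level and
// the (possibly quote-escaped) msg field.
var line = regexp.MustCompile(`level=(\w+)\s+msg="((?:[^"\\]|\\.)*)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := line.FindStringSubmatch(sc.Text()); m != nil && m[1] == "warning" {
			fmt.Println(m[2]) // print only warning-level messages
		}
	}
}
```

Piping a saved copy of the containerd section through this filter would print just the two "Failed to get podSandbox status" warnings.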
	
	
	==> coredns [694e192499be19fabc3b6cac11ebd89628613ffafd22d5ae26768d5a0be19031] <==
	[INFO] 10.244.0.7:45441 - 22345 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0002483s
	[INFO] 10.244.0.7:50510 - 37719 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001883337s
	[INFO] 10.244.0.7:50510 - 13393 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001951512s
	[INFO] 10.244.0.7:33771 - 20364 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075486s
	[INFO] 10.244.0.7:33771 - 7822 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084873s
	[INFO] 10.244.0.7:52519 - 61923 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000112852s
	[INFO] 10.244.0.7:52519 - 799 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000258572s
	[INFO] 10.244.0.7:45413 - 13163 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075223s
	[INFO] 10.244.0.7:45413 - 18024 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000164322s
	[INFO] 10.244.0.7:37510 - 47233 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078161s
	[INFO] 10.244.0.7:37510 - 43911 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000172772s
	[INFO] 10.244.0.7:54800 - 25841 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001743096s
	[INFO] 10.244.0.7:54800 - 42995 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001771494s
	[INFO] 10.244.0.7:38807 - 16989 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065747s
	[INFO] 10.244.0.7:38807 - 34655 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086292s
	[INFO] 10.244.0.24:43110 - 12235 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000352866s
	[INFO] 10.244.0.24:52660 - 15427 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000396836s
	[INFO] 10.244.0.24:58686 - 4434 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187542s
	[INFO] 10.244.0.24:50400 - 49742 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167038s
	[INFO] 10.244.0.24:41922 - 6871 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132946s
	[INFO] 10.244.0.24:41536 - 35793 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000145877s
	[INFO] 10.244.0.24:39261 - 58567 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002311826s
	[INFO] 10.244.0.24:39639 - 34342 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00206766s
	[INFO] 10.244.0.24:59755 - 23054 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003346825s
	[INFO] 10.244.0.24:46361 - 1533 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003289701s
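
Editor's note: the NXDOMAIN/NOERROR pairs above are the resolver's search-path expansion at work. With the default pod ndots:5, names like registry.kube-system.svc.cluster.local (four dots) and storage.googleapis.com (two dots) fall below the threshold, so each search suffix — the pod namespace's svc domain, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal — is tried and NXDOMAINs before the name as given finally resolves NOERROR. A sketch of that candidate-list expansion, assuming the querying pod sits in kube-system (consistent with the suffixes logged for 10.244.0.7):

```go
package main

import (
	"fmt"
	"strings"
)

// candidates reproduces resolv.conf search-list expansion: names with fewer
// than ndots dots try each search suffix before the name as given.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Search list as a kube-system pod would see it; the compute.internal
	// entry is inherited from the host, per the log lines above.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}
```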
	
	
	==> describe nodes <==
	Name:               addons-583947
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-583947
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=addons-583947
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-583947
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-583947"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:40:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-583947
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:46:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:43:41 +0000   Fri, 27 Sep 2024 17:40:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:43:41 +0000   Fri, 27 Sep 2024 17:40:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:43:41 +0000   Fri, 27 Sep 2024 17:40:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:43:41 +0000   Fri, 27 Sep 2024 17:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-583947
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f983571fe41145e89b863e1ac3391b0a
	  System UUID:                ed8029b8-ca39-4998-9bc2-aa3f8ba8b41e
	  Boot ID:                    7a34a0f0-976f-42af-914d-3a2d2373d850
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-979s9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-x67ht                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gcp-auth                    gcp-auth-89d5ffd79-8frvg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-4bppd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m56s
	  kube-system                 coredns-7c65d6cfc9-7qtwt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m5s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 csi-hostpathplugin-sxlw6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 etcd-addons-583947                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m10s
	  kube-system                 kindnet-9m7z7                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m5s
	  kube-system                 kube-apiserver-addons-583947                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-addons-583947       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-xp2qc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-scheduler-addons-583947                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 metrics-server-84c5f94fbc-h78zb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m
	  kube-system                 nvidia-device-plugin-daemonset-9r5dq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 registry-66c9cd494c-zg7pk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-proxy-w654q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 snapshot-controller-56fcc65765-cr5dc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-v4w7f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  local-path-storage          local-path-provisioner-86d989889c-q4pxh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  volcano-system              volcano-admission-5874dfdd79-dvr69          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-controllers-789ffc5785-vx57k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-scheduler-6c9778cbdf-mbvfg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dm4sw              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m4s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node addons-583947 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m18s (x7 over 6m18s)  kubelet          Node addons-583947 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node addons-583947 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m10s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m10s                  kubelet          Node addons-583947 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m10s                  kubelet          Node addons-583947 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m10s                  kubelet          Node addons-583947 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m6s                   node-controller  Node addons-583947 event: Registered Node addons-583947 in Controller
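
Editor's note: the Allocated resources block is plain arithmetic over the pod table above — the CPU request column sums to 100m + 100m + 100m + 100m + 250m + 200m + 100m + 100m = 1050m against 2000m allocatable, i.e. the 52% shown, leaving only 950m free; any new pod requesting more than that is unschedulable on this node. A sketch of that bookkeeping, with the quantities hard-coded from the table:

```go
package main

import "fmt"

func main() {
	// Non-zero CPU requests (millicores) from the pod table above.
	requests := map[string]int{
		"ingress-nginx-controller": 100,
		"coredns":                  100,
		"etcd":                     100,
		"kindnet":                  100,
		"kube-apiserver":           250,
		"kube-controller-manager":  200,
		"kube-scheduler":           100,
		"metrics-server":           100,
	}
	const allocatable = 2000 // 2 CPUs, in millicores

	total := 0
	for _, m := range requests {
		total += m
	}
	fmt.Printf("requested %dm of %dm (%d%%), %dm free\n",
		total, allocatable, total*100/allocatable, allocatable-total)
	// Prints: requested 1050m of 2000m (52%), 950m free
	// so a pod asking for a full CPU (1000m) cannot be scheduled here.
}
```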
	
	
	==> dmesg <==
	[Sep27 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014096] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.483709] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.056541] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.017956] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.717856] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.302460] kauditd_printk_skb: 36 callbacks suppressed
	[Sep27 16:44] hrtimer: interrupt took 20639535 ns
	[Sep27 17:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ba7f1a3252f5fcdcfd95589714a8504d0609c30d862137862065522985a3b9b9] <==
	{"level":"info","ts":"2024-09-27T17:40:30.527275Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T17:40:30.527766Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T17:40:30.528940Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-27T17:40:30.529388Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T17:40:30.529522Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-27T17:40:30.793300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:30.793524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:30.793642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-27T17:40:30.793753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:30.793830Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:30.793926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:30.794001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-27T17:40:30.796808Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-583947 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T17:40:30.797093Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:40:30.797487Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T17:40:30.798307Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:40:30.799315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-27T17:40:30.801860Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T17:40:30.802900Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T17:40:30.805314Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:30.825260Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T17:40:30.825467Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T17:40:30.825612Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:30.825774Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T17:40:30.825873Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [e9149244f669d6bbb007e2554f9817cebe5d10c4a237bb9c1473b364f4b68ee5] <==
	2024/09/27 17:43:26 GCP Auth Webhook started!
	2024/09/27 17:43:45 Ready to marshal response ...
	2024/09/27 17:43:45 Ready to write response ...
	2024/09/27 17:43:46 Ready to marshal response ...
	2024/09/27 17:43:46 Ready to write response ...
	
	
	==> kernel <==
	 17:46:47 up  1:29,  0 users,  load average: 0.12, 1.15, 2.04
	Linux addons-583947 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4da7a7dbf34ded45c1d329f3f388c7df7049354635573cf71f36f40df7dc58ef] <==
	I0927 17:44:43.918661       1 main.go:299] handling current node
	I0927 17:44:53.923605       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:44:53.923649       1 main.go:299] handling current node
	I0927 17:45:03.925551       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:03.925592       1 main.go:299] handling current node
	I0927 17:45:13.925380       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:13.925421       1 main.go:299] handling current node
	I0927 17:45:23.921355       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:23.921465       1 main.go:299] handling current node
	I0927 17:45:33.926919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:33.926953       1 main.go:299] handling current node
	I0927 17:45:43.918577       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:43.918618       1 main.go:299] handling current node
	I0927 17:45:53.921426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:45:53.921529       1 main.go:299] handling current node
	I0927 17:46:03.925342       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:46:03.925380       1 main.go:299] handling current node
	I0927 17:46:13.919056       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:46:13.919106       1 main.go:299] handling current node
	I0927 17:46:23.922163       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:46:23.922203       1 main.go:299] handling current node
	I0927 17:46:33.920069       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:46:33.920106       1 main.go:299] handling current node
	I0927 17:46:43.918507       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 17:46:43.918544       1 main.go:299] handling current node
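
The kindnet entries are a steady-state reconcile loop: every ten seconds the daemon re-lists the node's IPs and re-handles the current (and only) node. A stdlib-only sketch of that loop shape, where handleNode is a hypothetical stand-in for the per-node route bookkeeping the real daemon does:

    package main

    import (
        "log"
        "time"
    )

    // handleNode is a hypothetical placeholder for per-node reconciliation.
    func handleNode(ips map[string]struct{}) {
        log.Printf("Handling node with IPs: %v", ips)
    }

    func main() {
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            // In a real agent the IP set would come from the Kubernetes API;
            // the literal here mirrors the single node in the log.
            handleNode(map[string]struct{}{"192.168.49.2": {}})
        }
    }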
	
	
	==> kube-apiserver [50a07df1086218a338d1280db02c3d35e227c66cb27645ea79410f9428b81238] <==
	W0927 17:42:00.598360       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:01.629552       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:02.635552       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:03.639485       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:04.713683       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:05.766614       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:06.783231       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:07.866845       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:08.883968       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:09.975255       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:11.007850       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:12.063594       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:13.310752       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:14.350336       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:15.419249       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:16.503047       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:17.594145       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.222.88:443: connect: connection refused
	W0927 17:42:18.648349       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.91.94:443: connect: connection refused
	E0927 17:42:18.648392       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.91.94:443: connect: connection refused" logger="UnhandledError"
	W0927 17:42:58.653499       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.91.94:443: connect: connection refused
	E0927 17:42:58.653541       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.91.94:443: connect: connection refused" logger="UnhandledError"
	W0927 17:42:58.708022       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.91.94:443: connect: connection refused
	E0927 17:42:58.708064       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.91.94:443: connect: connection refused" logger="UnhandledError"
	I0927 17:43:45.666804       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0927 17:43:45.703871       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
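
Two different failure modes show up above: the volcano queue webhook "fails closed" (queue writes are rejected while volcano-admission is still coming up), while the gcp-auth webhook "fails open" (requests are admitted unmutated). The difference is the webhook's failurePolicy. A sketch of the two settings using the k8s.io/api/admissionregistration/v1 types; these struct literals are illustrative fragments, not the addons' actual manifests, and a server-valid webhook would also need clientConfig, sideEffects, and admissionReviewVersions:

    package main

    import (
        "fmt"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    )

    func main() {
        failClosed := admissionregistrationv1.Fail   // unreachable webhook: request rejected
        failOpen := admissionregistrationv1.Ignore   // unreachable webhook: request admitted unmutated

        queue := admissionregistrationv1.MutatingWebhook{
            Name:          "mutatequeue.volcano.sh",
            FailurePolicy: &failClosed,
        }
        auth := admissionregistrationv1.MutatingWebhook{
            Name:          "gcp-auth-mutate.k8s.io",
            FailurePolicy: &failOpen,
        }
        fmt.Println(queue.Name, *queue.FailurePolicy) // Fail
        fmt.Println(auth.Name, *auth.FailurePolicy)   // Ignore
    }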
	
	
	==> kube-controller-manager [e8e585e0f5102223ffed2d3138c3eafe55433b3ac44915670f48ba6f6a3878c4] <==
	I0927 17:42:58.696981       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:42:58.718893       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:42:58.725143       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:42:58.734524       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:42:58.746779       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:00.145511       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:43:00.289690       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:01.243831       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:01.341155       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:43:02.254662       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:02.353483       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:43:02.359052       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:02.364957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:43:02.373604       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0927 17:43:03.265509       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:03.275170       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:03.283807       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0927 17:43:27.332805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.724867ms"
	I0927 17:43:27.332889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="47.015µs"
	I0927 17:43:32.031589       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0927 17:43:32.067827       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0927 17:43:33.008269       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0927 17:43:33.056341       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0927 17:43:41.809831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-583947"
	I0927 17:43:45.365769       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [d6f0f812dfc4709b87cf5775b4e2a0822f1bb2796e0a8a0d148ec88681c9f577] <==
	I0927 17:40:43.421897       1 server_linux.go:66] "Using iptables proxy"
	I0927 17:40:43.530539       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 17:40:43.530624       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:40:43.560406       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 17:40:43.560464       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:40:43.562457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:40:43.562934       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:40:43.562958       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:40:43.574754       1 config.go:199] "Starting service config controller"
	I0927 17:40:43.574797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:40:43.574832       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:40:43.574837       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:40:43.575550       1 config.go:328] "Starting node config controller"
	I0927 17:40:43.575563       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:40:43.675059       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:40:43.675119       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:40:43.676750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b8914c3ce35010b29d21c0e655c2e5414feee786b9a25b4c86dc390e60c88c9d] <==
	W0927 17:40:35.744639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:35.744819       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.745943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 17:40:35.746114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.748825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 17:40:35.749038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.749268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:35.749376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.749425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 17:40:35.749601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.749823       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 17:40:35.749946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.750124       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 17:40:35.750180       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 17:40:35.750416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 17:40:35.750471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.750608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 17:40:35.750639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.750785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 17:40:35.750805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.750969       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 17:40:35.750992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 17:40:35.750977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 17:40:35.751256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 17:40:36.934417       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
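
The scheduler's "forbidden" listings are startup-ordering noise: its informers begin listing before the kube-scheduler RBAC bindings are reconciled, and the closing "Caches are synced" line shows the watches recovered. A small stdlib sketch of the kind of spot check that verifies the permissions have settled, shelling out to kubectl auth can-i with the same identity named in the errors:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // kubectl auth can-i exits non-zero when the answer is "no",
        // so the error path doubles as the negative result.
        out, err := exec.Command("kubectl", "auth", "can-i", "list", "nodes",
            "--as=system:kube-scheduler").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("not yet permitted:", err)
        }
    }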
	
	
	==> kubelet <==
	Sep 27 17:43:01 addons-583947 kubelet[1493]: I0927 17:43:01.223665    1493 scope.go:117] "RemoveContainer" containerID="c96110c7177bc6170c1f3fb06e55b2c1757c55bb74818f3a1a6cbced492084f2"
	Sep 27 17:43:01 addons-583947 kubelet[1493]: I0927 17:43:01.510920    1493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcj5\" (UniqueName: \"kubernetes.io/projected/8550d486-abbb-4149-9e05-a4ed68e4bfb2-kube-api-access-kdcj5\") pod \"8550d486-abbb-4149-9e05-a4ed68e4bfb2\" (UID: \"8550d486-abbb-4149-9e05-a4ed68e4bfb2\") "
	Sep 27 17:43:01 addons-583947 kubelet[1493]: I0927 17:43:01.517497    1493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8550d486-abbb-4149-9e05-a4ed68e4bfb2-kube-api-access-kdcj5" (OuterVolumeSpecName: "kube-api-access-kdcj5") pod "8550d486-abbb-4149-9e05-a4ed68e4bfb2" (UID: "8550d486-abbb-4149-9e05-a4ed68e4bfb2"). InnerVolumeSpecName "kube-api-access-kdcj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:43:01 addons-583947 kubelet[1493]: I0927 17:43:01.612402    1493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kdcj5\" (UniqueName: \"kubernetes.io/projected/8550d486-abbb-4149-9e05-a4ed68e4bfb2-kube-api-access-kdcj5\") on node \"addons-583947\" DevicePath \"\""
	Sep 27 17:43:02 addons-583947 kubelet[1493]: I0927 17:43:02.227746    1493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d745345370f2c5b183b008ce3a00cce8842f7e8bbb110b628a3284556c235c45"
	Sep 27 17:43:02 addons-583947 kubelet[1493]: I0927 17:43:02.417155    1493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tl97\" (UniqueName: \"kubernetes.io/projected/c7cb8a25-1125-48a7-92bd-9a2928dfee51-kube-api-access-7tl97\") pod \"c7cb8a25-1125-48a7-92bd-9a2928dfee51\" (UID: \"c7cb8a25-1125-48a7-92bd-9a2928dfee51\") "
	Sep 27 17:43:02 addons-583947 kubelet[1493]: I0927 17:43:02.422485    1493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cb8a25-1125-48a7-92bd-9a2928dfee51-kube-api-access-7tl97" (OuterVolumeSpecName: "kube-api-access-7tl97") pod "c7cb8a25-1125-48a7-92bd-9a2928dfee51" (UID: "c7cb8a25-1125-48a7-92bd-9a2928dfee51"). InnerVolumeSpecName "kube-api-access-7tl97". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 17:43:02 addons-583947 kubelet[1493]: I0927 17:43:02.517834    1493 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7tl97\" (UniqueName: \"kubernetes.io/projected/c7cb8a25-1125-48a7-92bd-9a2928dfee51-kube-api-access-7tl97\") on node \"addons-583947\" DevicePath \"\""
	Sep 27 17:43:03 addons-583947 kubelet[1493]: I0927 17:43:03.239677    1493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e7b699d91db94a50fc23fb77c27efe31275a8d3b597baa7ab3a05cfe1c6244"
	Sep 27 17:43:08 addons-583947 kubelet[1493]: I0927 17:43:08.203061    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-979s9" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:43:24 addons-583947 kubelet[1493]: I0927 17:43:24.203049    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9r5dq" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:43:26 addons-583947 kubelet[1493]: I0927 17:43:26.203801    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zg7pk" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:43:32 addons-583947 kubelet[1493]: I0927 17:43:32.048965    1493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-8frvg" podStartSLOduration=71.081775704 podStartE2EDuration="1m14.048943986s" podCreationTimestamp="2024-09-27 17:42:18 +0000 UTC" firstStartedPulling="2024-09-27 17:43:23.913812905 +0000 UTC m=+166.872022964" lastFinishedPulling="2024-09-27 17:43:26.880981187 +0000 UTC m=+169.839191246" observedRunningTime="2024-09-27 17:43:27.325938931 +0000 UTC m=+170.284149031" watchObservedRunningTime="2024-09-27 17:43:32.048943986 +0000 UTC m=+175.007154053"
	Sep 27 17:43:33 addons-583947 kubelet[1493]: I0927 17:43:33.207192    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8550d486-abbb-4149-9e05-a4ed68e4bfb2" path="/var/lib/kubelet/pods/8550d486-abbb-4149-9e05-a4ed68e4bfb2/volumes"
	Sep 27 17:43:33 addons-583947 kubelet[1493]: I0927 17:43:33.207604    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cb8a25-1125-48a7-92bd-9a2928dfee51" path="/var/lib/kubelet/pods/c7cb8a25-1125-48a7-92bd-9a2928dfee51/volumes"
	Sep 27 17:43:37 addons-583947 kubelet[1493]: I0927 17:43:37.211016    1493 scope.go:117] "RemoveContainer" containerID="76a20bce00c80f21bf07af87006594fa5b0ef151ba958fd0cf604acfc6b90185"
	Sep 27 17:43:37 addons-583947 kubelet[1493]: I0927 17:43:37.221060    1493 scope.go:117] "RemoveContainer" containerID="3e9b78a0d6a9117b71d7a5193ffc2bf979e9c144933e7b99db753c0abd7172ed"
	Sep 27 17:43:47 addons-583947 kubelet[1493]: I0927 17:43:47.207854    1493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac1ff67-9bc5-4d1d-a0bf-6449a73b4f62" path="/var/lib/kubelet/pods/4ac1ff67-9bc5-4d1d-a0bf-6449a73b4f62/volumes"
	Sep 27 17:44:20 addons-583947 kubelet[1493]: I0927 17:44:20.204114    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-w654q" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:44:37 addons-583947 kubelet[1493]: I0927 17:44:37.279724    1493 scope.go:117] "RemoveContainer" containerID="0b8258051a47e7f0a17452d487888c4c4744a388f24d60a399a8c2d3e993bd63"
	Sep 27 17:44:46 addons-583947 kubelet[1493]: I0927 17:44:46.203203    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9r5dq" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:44:51 addons-583947 kubelet[1493]: I0927 17:44:51.204321    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zg7pk" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:45:49 addons-583947 kubelet[1493]: I0927 17:45:49.204396    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-w654q" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:46:01 addons-583947 kubelet[1493]: I0927 17:46:01.203356    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9r5dq" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 17:46:16 addons-583947 kubelet[1493]: I0927 17:46:16.204116    1493 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zg7pk" secret="" err="secret \"gcp-auth\" not found"
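
The recurring "Unable to retrieve pull secret" lines are warnings, not failures: each pod names a "gcp-auth" image pull secret that is not present in its namespace, and the image pull proceeds anyway. A one-off check of that, sketched with the stdlib (context name copied from the run above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "addons-583947",
            "get", "secret", "gcp-auth", "-n", "kube-system").CombinedOutput()
        if err != nil {
            // NotFound here matches the kubelet warning above.
            fmt.Printf("secret missing, as the warning says: %s", out)
            return
        }
        fmt.Printf("%s", out)
    }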
	
	
	==> storage-provisioner [696f8c7aa8afa7b64a08a4c08336f1064e2af23a302f1ce5871274fb59ef3203] <==
	I0927 17:40:48.250204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 17:40:48.272063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 17:40:48.272106       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 17:40:48.283318       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 17:40:48.283518       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-583947_1f43a6e3-0c5e-4fb8-af6e-5aa0ae6f5e87!
	I0927 17:40:48.284714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c8c6016-ff25-4a6d-a71b-7ebe9a05093e", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-583947_1f43a6e3-0c5e-4fb8-af6e-5aa0ae6f5e87 became leader
	I0927 17:40:48.383652       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-583947_1f43a6e3-0c5e-4fb8-af6e-5aa0ae6f5e87!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-583947 -n addons-583947
helpers_test.go:261: (dbg) Run:  kubectl --context addons-583947 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-22b75 ingress-nginx-admission-patch-bk854 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-583947 describe pod ingress-nginx-admission-create-22b75 ingress-nginx-admission-patch-bk854 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-583947 describe pod ingress-nginx-admission-create-22b75 ingress-nginx-admission-patch-bk854 test-job-nginx-0: exit status 1 (104.693218ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-22b75" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bk854" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-583947 describe pod ingress-nginx-admission-create-22b75 ingress-nginx-admission-patch-bk854 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (201.04s)
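
The NotFound errors in the post-mortem are a benign race: by the time the harness re-describes the pods, the admission jobs and the unschedulable vcjob pod have already been cleaned up. A sketch of tolerating that race when shelling out to kubectl (pod and context names copied from the run above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "addons-583947",
            "describe", "pod", "test-job-nginx-0", "-n", "my-volcano").CombinedOutput()
        if err != nil && strings.Contains(string(out), "NotFound") {
            // The pod was deleted between the list and the describe;
            // treat that as "nothing left to report", not a hard error.
            fmt.Println("pod already gone, skipping post-mortem")
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }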

TestStartStop/group/old-k8s-version/serial/SecondStart (379.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.015760982s)

-- stdout --
	* [old-k8s-version-313926] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-313926" primary control-plane node in "old-k8s-version-313926" cluster
	* Pulling base image v0.0.45-1727108449-19696 ...
	* Restarting existing docker container for "old-k8s-version-313926" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-313926 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0927 18:31:18.397211  509591 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:31:18.398008  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:31:18.398062  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:31:18.398084  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:31:18.398458  509591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:31:18.398981  509591 out.go:352] Setting JSON to false
	I0927 18:31:18.400064  509591 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8030,"bootTime":1727453849,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 18:31:18.400188  509591 start.go:139] virtualization:  
	I0927 18:31:18.403634  509591 out.go:177] * [old-k8s-version-313926] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 18:31:18.407183  509591 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:31:18.409182  509591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:31:18.410159  509591 notify.go:220] Checking for updates...
	I0927 18:31:18.414771  509591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 18:31:18.416843  509591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 18:31:18.419008  509591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 18:31:18.421304  509591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:31:18.424220  509591 config.go:182] Loaded profile config "old-k8s-version-313926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0927 18:31:18.427344  509591 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 18:31:18.429786  509591 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:31:18.478543  509591 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 18:31:18.478682  509591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:31:18.554011  509591 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-27 18:31:18.534893962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:31:18.554223  509591 docker.go:318] overlay module found
	I0927 18:31:18.556925  509591 out.go:177] * Using the docker driver based on existing profile
	I0927 18:31:18.559441  509591 start.go:297] selected driver: docker
	I0927 18:31:18.559464  509591 start.go:901] validating driver "docker" against &{Name:old-k8s-version-313926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-313926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:31:18.559586  509591 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:31:18.560202  509591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:31:18.644234  509591 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-27 18:31:18.632165044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:31:18.644647  509591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:31:18.644668  509591 cni.go:84] Creating CNI manager for ""
	I0927 18:31:18.644714  509591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 18:31:18.644749  509591 start.go:340] cluster config:
	{Name:old-k8s-version-313926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-313926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:31:18.647064  509591 out.go:177] * Starting "old-k8s-version-313926" primary control-plane node in "old-k8s-version-313926" cluster
	I0927 18:31:18.648902  509591 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 18:31:18.650944  509591 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 18:31:18.653343  509591 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 18:31:18.653426  509591 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 18:31:18.653437  509591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0927 18:31:18.653448  509591 cache.go:56] Caching tarball of preloaded images
	I0927 18:31:18.653545  509591 preload.go:172] Found /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 18:31:18.653556  509591 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0927 18:31:18.653694  509591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/config.json ...
	I0927 18:31:18.673764  509591 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0927 18:31:18.673785  509591 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0927 18:31:18.673804  509591 cache.go:194] Successfully downloaded all kic artifacts
	I0927 18:31:18.673833  509591 start.go:360] acquireMachinesLock for old-k8s-version-313926: {Name:mkc77509ee396006bcd23a884aa26c27d131d8ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:31:18.673890  509591 start.go:364] duration metric: took 38.826µs to acquireMachinesLock for "old-k8s-version-313926"
	I0927 18:31:18.673911  509591 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:31:18.673933  509591 fix.go:54] fixHost starting: 
	I0927 18:31:18.674203  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:18.692286  509591 fix.go:112] recreateIfNeeded on old-k8s-version-313926: state=Stopped err=<nil>
	W0927 18:31:18.692325  509591 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:31:18.694799  509591 out.go:177] * Restarting existing docker container for "old-k8s-version-313926" ...
	I0927 18:31:18.697619  509591 cli_runner.go:164] Run: docker start old-k8s-version-313926
	I0927 18:31:19.002390  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:19.024824  509591 kic.go:430] container "old-k8s-version-313926" state is running.
	I0927 18:31:19.025340  509591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-313926
	I0927 18:31:19.053747  509591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/config.json ...
	I0927 18:31:19.054057  509591 machine.go:93] provisionDockerMachine start ...
	I0927 18:31:19.054123  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:19.075762  509591 main.go:141] libmachine: Using SSH client type: native
	I0927 18:31:19.076160  509591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0927 18:31:19.076175  509591 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:31:19.076746  509591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48456->127.0.0.1:33433: read: connection reset by peer
	I0927 18:31:22.225137  509591 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-313926
	
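The handshake reset at 18:31:19 is the provisioner probing sshd before the restarted container is ready to listen; the successful hostname command three seconds later shows the retry absorbing it. A stdlib sketch of that dial-until-ready pattern (address and deadline are illustrative, using the forwarded port from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for {
            conn, err := net.DialTimeout("tcp", "127.0.0.1:33433", 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("ssh port is accepting connections")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("gave up:", err)
                return
            }
            time.Sleep(time.Second) // container may still be booting; retry
        }
    }
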
	I0927 18:31:22.225164  509591 ubuntu.go:169] provisioning hostname "old-k8s-version-313926"
	I0927 18:31:22.225263  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:22.248521  509591 main.go:141] libmachine: Using SSH client type: native
	I0927 18:31:22.248801  509591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0927 18:31:22.248814  509591 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-313926 && echo "old-k8s-version-313926" | sudo tee /etc/hostname
	I0927 18:31:22.397788  509591 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-313926
	
	I0927 18:31:22.397941  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:22.416198  509591 main.go:141] libmachine: Using SSH client type: native
	I0927 18:31:22.416466  509591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0927 18:31:22.416485  509591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-313926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-313926/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-313926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:31:22.557396  509591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:31:22.557424  509591 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19712-294006/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-294006/.minikube}
	I0927 18:31:22.557485  509591 ubuntu.go:177] setting up certificates
	I0927 18:31:22.557497  509591 provision.go:84] configureAuth start
	I0927 18:31:22.557615  509591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-313926
	I0927 18:31:22.574683  509591 provision.go:143] copyHostCerts
	I0927 18:31:22.574755  509591 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem, removing ...
	I0927 18:31:22.574776  509591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem
	I0927 18:31:22.574853  509591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem (1675 bytes)
	I0927 18:31:22.574971  509591 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem, removing ...
	I0927 18:31:22.574981  509591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem
	I0927 18:31:22.575023  509591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem (1078 bytes)
	I0927 18:31:22.575096  509591 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem, removing ...
	I0927 18:31:22.575107  509591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem
	I0927 18:31:22.575134  509591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem (1123 bytes)
	I0927 18:31:22.575198  509591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-313926 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-313926]
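
The san=[...] list in the provision.go:117 line above becomes the subject alternative names of the machine's Docker server certificate. The sketch below generates a certificate with those SANs using Go's standard crypto/x509; it is self-signed for brevity, whereas minikube signs with the CA key from ca-key.pem, and the code is an illustration rather than minikube's actual helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-313926"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-313926"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template doubles as parent); minikube passes the CA
	// certificate and CA private key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
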
	I0927 18:31:22.980139  509591 provision.go:177] copyRemoteCerts
	I0927 18:31:22.980259  509591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:31:22.980338  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:23.003780  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:23.098935  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 18:31:23.128254  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 18:31:23.154963  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 18:31:23.188818  509591 provision.go:87] duration metric: took 631.29866ms to configureAuth
	I0927 18:31:23.188848  509591 ubuntu.go:193] setting minikube options for container-runtime
	I0927 18:31:23.189042  509591 config.go:182] Loaded profile config "old-k8s-version-313926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0927 18:31:23.189058  509591 machine.go:96] duration metric: took 4.134988234s to provisionDockerMachine
	I0927 18:31:23.189067  509591 start.go:293] postStartSetup for "old-k8s-version-313926" (driver="docker")
	I0927 18:31:23.189078  509591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:31:23.189132  509591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:31:23.189176  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:23.212491  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:23.311991  509591 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:31:23.319680  509591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 18:31:23.319720  509591 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 18:31:23.319738  509591 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 18:31:23.319746  509591 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 18:31:23.319761  509591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/addons for local assets ...
	I0927 18:31:23.319827  509591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/files for local assets ...
	I0927 18:31:23.319913  509591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem -> 2993952.pem in /etc/ssl/certs
	I0927 18:31:23.320022  509591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:31:23.344164  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem --> /etc/ssl/certs/2993952.pem (1708 bytes)
	I0927 18:31:23.385855  509591 start.go:296] duration metric: took 196.771493ms for postStartSetup
	I0927 18:31:23.385938  509591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:31:23.385983  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:23.410373  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:23.514437  509591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 18:31:23.519607  509591 fix.go:56] duration metric: took 4.845681494s for fixHost
	I0927 18:31:23.519630  509591 start.go:83] releasing machines lock for "old-k8s-version-313926", held for 4.845730652s
	I0927 18:31:23.519705  509591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-313926
	I0927 18:31:23.543053  509591 ssh_runner.go:195] Run: cat /version.json
	I0927 18:31:23.543104  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:23.543185  509591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:31:23.543291  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:23.579286  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:23.588593  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:23.854308  509591 ssh_runner.go:195] Run: systemctl --version
	I0927 18:31:23.859848  509591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 18:31:23.866823  509591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 18:31:23.900168  509591 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0927 18:31:23.900270  509591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:31:23.911351  509591 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
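
The two find invocations above prepare the node's CNI directory: the first patches any loopback config in place (adding a "name" field if missing and pinning cniVersion to 1.0.0 so containerd 1.7 accepts it), and the second would rename bridge/podman configs to *.mk_disabled so only the CNI minikube installs stays active; here there were none to disable. The same loopback patch on an in-memory config, as a Go sketch (not minikube's implementation):

package main

import (
	"encoding/json"
	"fmt"
)

// patchLoopback mirrors the sed edits from the log: ensure a "name" key and
// force cniVersion to 1.0.0.
func patchLoopback(raw []byte) ([]byte, error) {
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		return nil, err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // older configs ship without a name
	}
	conf["cniVersion"] = "1.0.0"
	return json.MarshalIndent(conf, "", "  ")
}

func main() {
	out, err := patchLoopback([]byte(`{"cniVersion":"0.3.1","type":"loopback"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
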
	I0927 18:31:23.911386  509591 start.go:495] detecting cgroup driver to use...
	I0927 18:31:23.911426  509591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 18:31:23.911493  509591 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 18:31:23.938286  509591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 18:31:23.952517  509591 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:31:23.952596  509591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:31:23.967952  509591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:31:23.982625  509591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:31:24.106749  509591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:31:24.231622  509591 docker.go:233] disabling docker service ...
	I0927 18:31:24.231710  509591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:31:24.247558  509591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:31:24.261006  509591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:31:24.374743  509591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:31:24.500707  509591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:31:24.515683  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:31:24.535981  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0927 18:31:24.547845  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 18:31:24.559781  509591 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 18:31:24.559869  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 18:31:24.571440  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 18:31:24.582905  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 18:31:24.594181  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 18:31:24.605499  509591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:31:24.616729  509591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 18:31:24.630194  509591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:31:24.640419  509591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:31:24.650708  509591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:31:24.758539  509591 ssh_runner.go:195] Run: sudo systemctl restart containerd
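
The sed pipeline preceding this restart rewrites /etc/containerd/config.toml to match the detected "cgroupfs" driver and the v1.20 payload: the sandbox image is pinned to pause:3.2, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to runc.v2, and SystemdCgroup is forced to false. The key substitution expressed as a Go regexp (a sketch that, like the sed command, assumes a SystemdCgroup line already exists):

package main

import (
	"fmt"
	"regexp"
)

// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

func useCgroupfs(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Println(useCgroupfs("    SystemdCgroup = true"))
}
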
	I0927 18:31:24.989894  509591 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0927 18:31:24.989978  509591 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0927 18:31:24.994595  509591 start.go:563] Will wait 60s for crictl version
	I0927 18:31:24.994705  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:31:25.000150  509591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:31:25.049582  509591 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0927 18:31:25.049729  509591 ssh_runner.go:195] Run: containerd --version
	I0927 18:31:25.074124  509591 ssh_runner.go:195] Run: containerd --version
	I0927 18:31:25.102693  509591 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0927 18:31:25.105210  509591 cli_runner.go:164] Run: docker network inspect old-k8s-version-313926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 18:31:25.126079  509591 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0927 18:31:25.131148  509591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:31:25.143927  509591 kubeadm.go:883] updating cluster {Name:old-k8s-version-313926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-313926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:31:25.144065  509591 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 18:31:25.144125  509591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:31:25.192186  509591 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 18:31:25.192211  509591 containerd.go:534] Images already preloaded, skipping extraction
	I0927 18:31:25.192292  509591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:31:25.263929  509591 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 18:31:25.263954  509591 cache_images.go:84] Images are preloaded, skipping loading
	I0927 18:31:25.263964  509591 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0927 18:31:25.264074  509591 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-313926 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-313926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
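
The drop-in above is rendered from the profile config, and the flags are v1.20-era: --network-plugin=cni and the --container-runtime=remote/--container-runtime-endpoint pair predate the removal of dockershim and are gone from modern kubelets. A minimal text/template sketch of that rendering (the template constant and field names are illustrative, not minikube's source):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override={{.NodeName}} --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "old-k8s-version-313926",
		"NodeIP":            "192.168.76.2",
	}); err != nil {
		panic(err)
	}
}
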
	I0927 18:31:25.264149  509591 ssh_runner.go:195] Run: sudo crictl info
	I0927 18:31:25.304046  509591 cni.go:84] Creating CNI manager for ""
	I0927 18:31:25.304074  509591 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 18:31:25.304084  509591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:31:25.304104  509591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-313926 NodeName:old-k8s-version-313926 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 18:31:25.304231  509591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-313926"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:31:25.304305  509591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 18:31:25.314684  509591 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:31:25.314791  509591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:31:25.325728  509591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0927 18:31:25.349502  509591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:31:25.372934  509591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0927 18:31:25.392647  509591 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0927 18:31:25.396369  509591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:31:25.407071  509591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:31:25.514750  509591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:31:25.540115  509591 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926 for IP: 192.168.76.2
	I0927 18:31:25.540139  509591 certs.go:194] generating shared ca certs ...
	I0927 18:31:25.540155  509591 certs.go:226] acquiring lock for ca certs: {Name:mk0891ce7588143d48f2c5fb538d185b80c1ae26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:31:25.540346  509591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key
	I0927 18:31:25.540410  509591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key
	I0927 18:31:25.540425  509591 certs.go:256] generating profile certs ...
	I0927 18:31:25.540538  509591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.key
	I0927 18:31:25.540654  509591 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/apiserver.key.4328ff32
	I0927 18:31:25.540728  509591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/proxy-client.key
	I0927 18:31:25.540859  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395.pem (1338 bytes)
	W0927 18:31:25.540915  509591 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395_empty.pem, impossibly tiny 0 bytes
	I0927 18:31:25.540930  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:31:25.540974  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem (1078 bytes)
	I0927 18:31:25.541026  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:31:25.541058  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem (1675 bytes)
	I0927 18:31:25.541122  509591 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem (1708 bytes)
	I0927 18:31:25.541775  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:31:25.580429  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 18:31:25.611735  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:31:25.647966  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 18:31:25.678160  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 18:31:25.710552  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:31:25.747880  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:31:25.774065  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:31:25.800895  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395.pem --> /usr/share/ca-certificates/299395.pem (1338 bytes)
	I0927 18:31:25.826570  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem --> /usr/share/ca-certificates/2993952.pem (1708 bytes)
	I0927 18:31:25.852166  509591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:31:25.877291  509591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:31:25.895868  509591 ssh_runner.go:195] Run: openssl version
	I0927 18:31:25.901348  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2993952.pem && ln -fs /usr/share/ca-certificates/2993952.pem /etc/ssl/certs/2993952.pem"
	I0927 18:31:25.911243  509591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2993952.pem
	I0927 18:31:25.915361  509591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:50 /usr/share/ca-certificates/2993952.pem
	I0927 18:31:25.915426  509591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2993952.pem
	I0927 18:31:25.922205  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2993952.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:31:25.933435  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:31:25.944298  509591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:31:25.947760  509591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:31:25.947828  509591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:31:25.954923  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:31:25.963547  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/299395.pem && ln -fs /usr/share/ca-certificates/299395.pem /etc/ssl/certs/299395.pem"
	I0927 18:31:25.973004  509591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299395.pem
	I0927 18:31:25.976432  509591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:50 /usr/share/ca-certificates/299395.pem
	I0927 18:31:25.976537  509591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299395.pem
	I0927 18:31:25.983748  509591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/299395.pem /etc/ssl/certs/51391683.0"
	I0927 18:31:25.992867  509591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:31:25.996381  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:31:26.003183  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:31:26.010111  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:31:26.016864  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:31:26.023514  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:31:26.033934  509591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
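
The openssl x509 -hash calls above compute each CA's subject hash, and the ln -fs commands publish the PEM under /etc/ssl/certs/<hash>.0, the lookup scheme OpenSSL-style verifiers use to locate trusted CAs; the -checkend 86400 probes that follow simply exit non-zero if a certificate expires within the next 24 hours, which would force regeneration. A sketch of the hash-and-link step (paths illustrative; shells out to the openssl binary):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	// The log then runs: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
	fmt.Printf("would link /etc/ssl/certs/%s.0\n", h)
}
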
	I0927 18:31:26.043119  509591 kubeadm.go:392] StartCluster: {Name:old-k8s-version-313926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-313926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:31:26.043228  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0927 18:31:26.043303  509591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:31:26.084324  509591 cri.go:89] found id: "3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:31:26.084363  509591 cri.go:89] found id: "0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:31:26.084369  509591 cri.go:89] found id: "7aa383561495d5d220ddb1b38c563543c2797f6022b19e1070e09be5d5d33d31"
	I0927 18:31:26.084373  509591 cri.go:89] found id: "4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:31:26.084376  509591 cri.go:89] found id: "9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:31:26.084381  509591 cri.go:89] found id: "0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:31:26.084385  509591 cri.go:89] found id: "67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:31:26.084388  509591 cri.go:89] found id: "4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:31:26.084391  509591 cri.go:89] found id: ""
	I0927 18:31:26.084476  509591 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0927 18:31:26.098211  509591 cri.go:116] JSON = null
	W0927 18:31:26.098308  509591 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
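
The WARNING above is a consistency check tripping, not a fatal error: crictl ps -a reported 8 kube-system containers, but runc --root /run/containerd/runc/k8s.io list -f json printed null, meaning runc saw nothing paused to unpause, so minikube logs the 0-vs-8 mismatch and moves on. The null output is valid JSON and decodes to an empty slice, which is exactly where the count of 0 comes from; a tiny Go illustration:

package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	var list []runcContainer
	// `runc list -f json` prints the literal `null` when there is nothing to
	// list; json.Unmarshal accepts it and leaves the slice nil.
	if err := json.Unmarshal([]byte("null"), &list); err != nil {
		panic(err)
	}
	fmt.Printf("paused containers: %d (crictl ps had reported 8)\n", len(list))
}
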
	I0927 18:31:26.098397  509591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:31:26.107724  509591 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 18:31:26.107745  509591 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 18:31:26.107802  509591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 18:31:26.117171  509591 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 18:31:26.117950  509591 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-313926" does not appear in /home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 18:31:26.118229  509591 kubeconfig.go:62] /home/jenkins/minikube-integration/19712-294006/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-313926" cluster setting kubeconfig missing "old-k8s-version-313926" context setting]
	I0927 18:31:26.118762  509591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/kubeconfig: {Name:mk3cffd40ec049ac1050f606c0f198b3abfa6caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:31:26.120195  509591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 18:31:26.129111  509591 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0927 18:31:26.129191  509591 kubeadm.go:597] duration metric: took 21.438279ms to restartPrimaryControlPlane
	I0927 18:31:26.129208  509591 kubeadm.go:394] duration metric: took 86.117045ms to StartCluster
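
The "does not require reconfiguration" decision above follows from the diff at 18:31:26.120: the kubeadm.yaml already on the node is compared with the freshly rendered kubeadm.yaml.new, and a clean diff means the control plane can be restarted without re-running kubeadm init. A hedged sketch of that decision, leaning on diff's documented exit codes (0 identical, 1 different, 2 trouble); the helper name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: skip kubeadm re-init
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // configs differ: reconfigure
	}
	return false, err // diff itself failed (e.g. missing file)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}
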
	I0927 18:31:26.129224  509591 settings.go:142] acquiring lock: {Name:mk6311c862b19a3d49ef46b1e763e636e4ddd1db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:31:26.129336  509591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 18:31:26.130297  509591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/kubeconfig: {Name:mk3cffd40ec049ac1050f606c0f198b3abfa6caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:31:26.130529  509591 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 18:31:26.130821  509591 config.go:182] Loaded profile config "old-k8s-version-313926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0927 18:31:26.130887  509591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 18:31:26.130954  509591 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-313926"
	I0927 18:31:26.130971  509591 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-313926"
	W0927 18:31:26.130982  509591 addons.go:243] addon storage-provisioner should already be in state true
	I0927 18:31:26.131005  509591 host.go:66] Checking if "old-k8s-version-313926" exists ...
	I0927 18:31:26.131497  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:26.132057  509591 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-313926"
	I0927 18:31:26.132085  509591 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-313926"
	W0927 18:31:26.132093  509591 addons.go:243] addon metrics-server should already be in state true
	I0927 18:31:26.132123  509591 host.go:66] Checking if "old-k8s-version-313926" exists ...
	I0927 18:31:26.132169  509591 addons.go:69] Setting dashboard=true in profile "old-k8s-version-313926"
	I0927 18:31:26.132182  509591 addons.go:234] Setting addon dashboard=true in "old-k8s-version-313926"
	W0927 18:31:26.132195  509591 addons.go:243] addon dashboard should already be in state true
	I0927 18:31:26.132220  509591 host.go:66] Checking if "old-k8s-version-313926" exists ...
	I0927 18:31:26.132584  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:26.132736  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:26.135248  509591 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-313926"
	I0927 18:31:26.135274  509591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-313926"
	I0927 18:31:26.135598  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:26.138537  509591 out.go:177] * Verifying Kubernetes components...
	I0927 18:31:26.140740  509591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:31:26.173504  509591 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0927 18:31:26.175582  509591 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0927 18:31:26.177412  509591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:31:26.177481  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0927 18:31:26.177492  509591 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0927 18:31:26.177558  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:26.179510  509591 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:31:26.179530  509591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 18:31:26.179599  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:26.185159  509591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 18:31:26.187106  509591 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 18:31:26.187129  509591 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 18:31:26.187205  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:26.189393  509591 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-313926"
	W0927 18:31:26.189414  509591 addons.go:243] addon default-storageclass should already be in state true
	I0927 18:31:26.189442  509591 host.go:66] Checking if "old-k8s-version-313926" exists ...
	I0927 18:31:26.189863  509591 cli_runner.go:164] Run: docker container inspect old-k8s-version-313926 --format={{.State.Status}}
	I0927 18:31:26.246330  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:26.261727  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:26.278388  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:26.278462  509591 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:26.278476  509591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 18:31:26.278533  509591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-313926
	I0927 18:31:26.306402  509591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/old-k8s-version-313926/id_rsa Username:docker}
	I0927 18:31:26.355533  509591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:31:26.371124  509591 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-313926" to be "Ready" ...
	I0927 18:31:26.430429  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:31:26.482790  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0927 18:31:26.482817  509591 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0927 18:31:26.497747  509591 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 18:31:26.497771  509591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 18:31:26.518227  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0927 18:31:26.518254  509591 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0927 18:31:26.521079  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:26.555920  509591 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 18:31:26.555961  509591 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 18:31:26.588924  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0927 18:31:26.588967  509591 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0927 18:31:26.610734  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.610780  509591 retry.go:31] will retry after 309.765335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
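
Every kubectl apply in this phase races the restarting apiserver, so these connection-refused failures are expected noise: retry.go reschedules each apply with a short randomized delay and repeats until port 8443 answers, which is why the same commands reappear below with growing timestamps. The looping pattern, sketched (delays and names are illustrative, not minikube's retry package):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryApply reissues apply until it succeeds or attempts run out, sleeping
// a jittered, slowly growing delay between tries.
func retryApply(apply func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64(300*time.Millisecond))) +
			time.Duration(i)*200*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryApply(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 5)
	fmt.Println("final:", err)
}
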
	I0927 18:31:26.619056  509591 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 18:31:26.619092  509591 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 18:31:26.657057  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0927 18:31:26.657092  509591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0927 18:31:26.666352  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:26.698651  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.698693  509591 retry.go:31] will retry after 126.427512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.703622  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0927 18:31:26.703698  509591 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0927 18:31:26.725022  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0927 18:31:26.725099  509591 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0927 18:31:26.747255  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0927 18:31:26.747284  509591 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0927 18:31:26.774738  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0927 18:31:26.774770  509591 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0927 18:31:26.797150  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.797182  509591 retry.go:31] will retry after 272.658459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.799558  509591 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 18:31:26.799640  509591 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0927 18:31:26.819637  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 18:31:26.825873  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:26.922245  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:26.961797  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.961836  509591 retry.go:31] will retry after 364.407185ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:26.961904  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:26.961915  509591 retry.go:31] will retry after 387.989765ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:27.013429  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.013465  509591 retry.go:31] will retry after 328.106879ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.070812  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:27.150308  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.150350  509591 retry.go:31] will retry after 234.702715ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.326533  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 18:31:27.342279  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:31:27.350753  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:27.385635  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:27.467458  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.467532  509591 retry.go:31] will retry after 515.67337ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:27.479757  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.479879  509591 retry.go:31] will retry after 419.694908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:27.531806  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.531840  509591 retry.go:31] will retry after 291.986235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:27.552320  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.552352  509591 retry.go:31] will retry after 488.985899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.824789  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:27.899862  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:27.902115  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.902148  509591 retry.go:31] will retry after 659.206199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.983440  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 18:31:27.989775  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:27.989811  509591 retry.go:31] will retry after 922.720738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.042113  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:28.063413  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.063459  509591 retry.go:31] will retry after 756.304465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 18:31:28.119405  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.119443  509591 retry.go:31] will retry after 1.066181875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.372353  509591 node_ready.go:53] error getting node "old-k8s-version-313926": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-313926": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 18:31:28.561836  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 18:31:28.641091  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.641126  509591 retry.go:31] will retry after 857.003782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.820429  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 18:31:28.888179  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.888215  509591 retry.go:31] will retry after 997.454894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.913428  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:28.997600  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:28.997632  509591 retry.go:31] will retry after 752.780523ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.186128  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:29.260454  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.260489  509591 retry.go:31] will retry after 938.249348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.499358  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 18:31:29.568115  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.568145  509591 retry.go:31] will retry after 1.633311979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.751478  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:29.829737  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.829778  509591 retry.go:31] will retry after 2.133618926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.886565  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 18:31:29.978219  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:29.978256  509591 retry.go:31] will retry after 1.041656261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:30.199526  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:30.280468  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:30.280506  509591 retry.go:31] will retry after 1.044052704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:30.871753  509591 node_ready.go:53] error getting node "old-k8s-version-313926": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-313926": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 18:31:31.021085  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 18:31:31.096356  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.096402  509591 retry.go:31] will retry after 1.138891587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.202536  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 18:31:31.268773  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.268806  509591 retry.go:31] will retry after 1.893995628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.325007  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 18:31:31.398295  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.398332  509591 retry.go:31] will retry after 4.221031343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:31.964550  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:32.038014  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:32.038052  509591 retry.go:31] will retry after 2.706174953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:32.236477  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 18:31:32.311882  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:32.311919  509591 retry.go:31] will retry after 3.213420687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:32.872598  509591 node_ready.go:53] error getting node "old-k8s-version-313926": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-313926": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 18:31:33.164015  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 18:31:33.266221  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:33.266255  509591 retry.go:31] will retry after 2.61756249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:34.744468  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 18:31:34.920367  509591 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:34.920402  509591 retry.go:31] will retry after 3.782636476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 18:31:35.525944  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 18:31:35.619810  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 18:31:35.884280  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:31:38.703251  509591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
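[Editor's note] The burst of "apply failed, will retry" entries above is minikube's retry.go backing off while the restarted apiserver is still refusing connections on localhost:8443; once it comes back, the four applies at 18:31:35-18:31:38 finally go through. A minimal Go sketch of that retry-with-jittered-backoff pattern follows; the helper name retryApply, the initial 300ms delay, and the 30s budget are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // retryApply runs `sudo <args...>` until it succeeds or the time budget
    // is spent, sleeping a growing, jittered interval between attempts --
    // the same shape as the "will retry after ..." lines in the log above.
    func retryApply(args []string, budget time.Duration) error {
    	start := time.Now()
    	delay := 300 * time.Millisecond
    	for {
    		out, err := exec.Command("sudo", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > budget {
    			return fmt.Errorf("giving up after %v: %v\n%s", budget, err, out)
    		}
    		// Jitter so the parallel appliers (storageclass, dashboard,
    		// metrics-server, ...) don't hit the apiserver in lockstep.
    		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", "kubectl",
    		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}
    	if err := retryApply(args, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }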
	I0927 18:31:43.016139  509591 node_ready.go:49] node "old-k8s-version-313926" has status "Ready":"True"
	I0927 18:31:43.016167  509591 node_ready.go:38] duration metric: took 16.645013581s for node "old-k8s-version-313926" to be "Ready" ...
	I0927 18:31:43.016178  509591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:31:43.293847  509591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-btnnf" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:43.422087  509591 pod_ready.go:93] pod "coredns-74ff55c5b-btnnf" in "kube-system" namespace has status "Ready":"True"
	I0927 18:31:43.422113  509591 pod_ready.go:82] duration metric: took 128.228472ms for pod "coredns-74ff55c5b-btnnf" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:43.422132  509591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:43.496842  509591 pod_ready.go:93] pod "etcd-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"True"
	I0927 18:31:43.496878  509591 pod_ready.go:82] duration metric: took 74.738089ms for pod "etcd-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:43.496894  509591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
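[Editor's note] Each pod_ready.go wait in this stretch boils down to reading the pod's PodReady condition: "Ready":"True" in the log means exactly that the condition's status is True. A short sketch of the check, under the assumption that the caller has already fetched the corev1.Pod from the apiserver (the client-go clientset plumbing is omitted):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True -- the
    // condition the "has status \"Ready\":\"True\"" log lines are testing.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{} // in the real check this comes from the apiserver
    	fmt.Println(isPodReady(pod))
    }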
	I0927 18:31:44.625779  509591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.005924164s)
	I0927 18:31:44.625823  509591 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-313926"
	I0927 18:31:44.625873  509591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.741553609s)
	I0927 18:31:44.626122  509591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.922842152s)
	I0927 18:31:44.626181  509591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.100189453s)
	I0927 18:31:44.628301  509591 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-313926 addons enable metrics-server
	
	I0927 18:31:44.634182  509591 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0927 18:31:44.636081  509591 addons.go:510] duration metric: took 18.505192824s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0927 18:31:45.508008  509591 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:48.003217  509591 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:50.012515  509591 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:51.505398  509591 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"True"
	I0927 18:31:51.505423  509591 pod_ready.go:82] duration metric: took 8.008520707s for pod "kube-apiserver-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:51.505435  509591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:31:53.511393  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:55.512375  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:57.512810  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:31:59.519506  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:02.012807  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:04.013481  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:06.512376  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:08.512645  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:10.513192  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:13.011348  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:15.011905  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:17.013227  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:19.013865  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:21.512959  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:23.521324  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:26.014189  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:28.511993  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:30.513451  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:33.012574  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:35.512226  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:38.012638  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:40.013347  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:42.512436  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:44.512609  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:47.011860  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:49.013021  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:51.512466  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:53.512929  509591 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:32:55.511539  509591 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"True"
	I0927 18:32:55.511567  509591 pod_ready.go:82] duration metric: took 1m4.006123283s for pod "kube-controller-manager-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:32:55.511579  509591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gccpt" in "kube-system" namespace to be "Ready" ...
	I0927 18:32:55.516812  509591 pod_ready.go:93] pod "kube-proxy-gccpt" in "kube-system" namespace has status "Ready":"True"
	I0927 18:32:55.516840  509591 pod_ready.go:82] duration metric: took 5.253584ms for pod "kube-proxy-gccpt" in "kube-system" namespace to be "Ready" ...
	I0927 18:32:55.516850  509591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:32:57.523923  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:00.023509  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:02.523519  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:05.022899  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:07.025274  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:09.522652  509591 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:10.523865  509591 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace has status "Ready":"True"
	I0927 18:33:10.523946  509591 pod_ready.go:82] duration metric: took 15.007085955s for pod "kube-scheduler-old-k8s-version-313926" in "kube-system" namespace to be "Ready" ...
	I0927 18:33:10.523965  509591 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace to be "Ready" ...
	I0927 18:33:12.530672  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:15.035894  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:17.532656  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:20.031352  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:22.033737  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:24.040779  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:26.530740  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:28.530956  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:31.032501  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:33.037560  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:35.530384  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:37.531432  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:40.039450  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:42.040924  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:44.530781  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:46.531082  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:49.032350  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:51.529937  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:53.531317  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:56.031085  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:33:58.039814  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:00.052093  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:02.529792  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:04.530057  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:06.530943  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:08.531954  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:11.030706  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:13.032575  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:15.037447  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:17.530138  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:19.532299  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:22.039647  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:24.529797  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:26.530591  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:29.034539  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:31.530667  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:34.042001  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:36.531750  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:39.036965  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:41.534132  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:44.047103  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:46.532127  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:49.031606  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:51.036409  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:53.530056  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:55.530698  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:34:57.534712  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:00.080398  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:02.531449  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:05.055067  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:07.530731  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:10.039957  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:12.530243  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:14.530905  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:17.032957  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:19.043961  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:21.531713  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:24.040498  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:26.045404  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:28.530270  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:31.034644  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:33.036457  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:35.050287  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:37.530930  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:40.056362  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:42.530783  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:45.112880  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:47.531185  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:50.047322  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:52.530484  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:54.530608  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:57.035277  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:35:59.036361  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:01.538382  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:04.034982  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:06.049165  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:08.530758  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:10.531552  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:13.035414  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:15.066711  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:17.530501  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:20.044543  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:22.530898  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:25.033648  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:27.036177  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:29.534219  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:31.535422  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:34.036455  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:36.041153  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:38.531470  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:41.031605  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:43.036952  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:45.093802  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:47.530859  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:49.531163  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:52.032612  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:54.035546  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:56.044993  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:36:58.048320  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:00.214662  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:02.530965  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:04.531322  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:07.031043  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:09.040518  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:10.530624  509591 pod_ready.go:82] duration metric: took 4m0.006642523s for pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace to be "Ready" ...
	E0927 18:37:10.530651  509591 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 18:37:10.530660  509591 pod_ready.go:39] duration metric: took 5m27.5144681s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
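
The block above is the readiness poll that consumes most of this phase: the pod's Ready condition is re-checked every couple of seconds until it flips to True or the 4m0s budget expires, at which point WaitExtra surfaces the deadline error. Below is a minimal sketch of such a loop, shelling out to kubectl; it is not minikube's implementation. The 2-second interval is an assumption for illustration (the real poller's cadence varies), while the context, namespace, and pod name are taken from the log.

    // wait_ready.go - a sketch of the poll-until-Ready loop recorded above;
    // NOT minikube's pod_ready.go implementation. Assumes kubectl is on PATH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is "True".
    func podReady(kubectx, ns, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", kubectx, "-n", ns,
            "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen above
        for time.Now().Before(deadline) {
            ready, err := podReady("old-k8s-version-313926", "kube-system",
                "metrics-server-9975d5f86-cft95")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // assumed interval, not minikube's exact backoff
        }
        fmt.Println("waitPodCondition: context deadline exceeded")
    }
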
	I0927 18:37:10.530676  509591 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:37:10.530709  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 18:37:10.530777  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 18:37:10.568807  509591 cri.go:89] found id: "85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:10.568833  509591 cri.go:89] found id: "4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:10.568839  509591 cri.go:89] found id: ""
	I0927 18:37:10.568847  509591 logs.go:276] 2 containers: [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631]
	I0927 18:37:10.568923  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.572694  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.575917  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 18:37:10.575991  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 18:37:10.621655  509591 cri.go:89] found id: "e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:10.621682  509591 cri.go:89] found id: "67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:10.621687  509591 cri.go:89] found id: ""
	I0927 18:37:10.621695  509591 logs.go:276] 2 containers: [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a]
	I0927 18:37:10.621752  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.625325  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.628506  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 18:37:10.628591  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 18:37:10.669907  509591 cri.go:89] found id: "ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:10.669930  509591 cri.go:89] found id: "3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:10.669935  509591 cri.go:89] found id: ""
	I0927 18:37:10.669944  509591 logs.go:276] 2 containers: [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c]
	I0927 18:37:10.670028  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.673878  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.677166  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 18:37:10.677305  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 18:37:10.715097  509591 cri.go:89] found id: "51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:10.715161  509591 cri.go:89] found id: "9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:10.715182  509591 cri.go:89] found id: ""
	I0927 18:37:10.715207  509591 logs.go:276] 2 containers: [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289]
	I0927 18:37:10.715291  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.718970  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.722461  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 18:37:10.722542  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 18:37:10.766417  509591 cri.go:89] found id: "fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:10.766442  509591 cri.go:89] found id: "4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:10.766448  509591 cri.go:89] found id: ""
	I0927 18:37:10.766455  509591 logs.go:276] 2 containers: [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd]
	I0927 18:37:10.766543  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.770255  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.774150  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 18:37:10.774269  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 18:37:10.820026  509591 cri.go:89] found id: "a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:10.820053  509591 cri.go:89] found id: "0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:10.820058  509591 cri.go:89] found id: ""
	I0927 18:37:10.820066  509591 logs.go:276] 2 containers: [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055]
	I0927 18:37:10.820154  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.823895  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.827543  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 18:37:10.827628  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 18:37:10.867993  509591 cri.go:89] found id: "6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:10.868018  509591 cri.go:89] found id: "0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:10.868026  509591 cri.go:89] found id: ""
	I0927 18:37:10.868034  509591 logs.go:276] 2 containers: [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d]
	I0927 18:37:10.868115  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.871642  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.875460  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0927 18:37:10.875536  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 18:37:10.912536  509591 cri.go:89] found id: "e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:10.912559  509591 cri.go:89] found id: "86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:10.912565  509591 cri.go:89] found id: ""
	I0927 18:37:10.912572  509591 logs.go:276] 2 containers: [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef]
	I0927 18:37:10.912640  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.916243  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.919651  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 18:37:10.919727  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 18:37:10.960688  509591 cri.go:89] found id: "b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:10.960761  509591 cri.go:89] found id: ""
	I0927 18:37:10.960775  509591 logs.go:276] 1 containers: [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a]
	I0927 18:37:10.960850  509591 ssh_runner.go:195] Run: which crictl
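
Each cri.go listing step above follows one pattern: run `sudo crictl ps -a --quiet --name=<component>` on the node and read one 64-hex container ID per output line. Finding two IDs per component (the post-restart container and its pre-restart predecessor) is expected on this SecondStart pass; kubernetes-dashboard, started only after the restart, yields one. A sketch of that collection step follows, with the assumption that crictl runs locally rather than through the SSH runner used in the real test.

    // crictl_ids.go - sketch of the cri.go container-listing step above; it
    // runs crictl locally, whereas the test drives it over SSH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs crictl prints for containers whose name
    // matches the given component, one 64-hex ID per line.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line) // typically the running and the exited container
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            fmt.Printf("%s: %v (err: %v)\n", c, ids, err)
        }
    }
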
	I0927 18:37:10.964327  509591 logs.go:123] Gathering logs for etcd [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab] ...
	I0927 18:37:10.964351  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:11.007263  509591 logs.go:123] Gathering logs for coredns [3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c] ...
	I0927 18:37:11.007290  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:11.052851  509591 logs.go:123] Gathering logs for kube-proxy [4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd] ...
	I0927 18:37:11.052882  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:11.093894  509591 logs.go:123] Gathering logs for kube-controller-manager [0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055] ...
	I0927 18:37:11.093924  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:11.155524  509591 logs.go:123] Gathering logs for kindnet [0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d] ...
	I0927 18:37:11.155561  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:11.194499  509591 logs.go:123] Gathering logs for storage-provisioner [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0] ...
	I0927 18:37:11.194532  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:11.233034  509591 logs.go:123] Gathering logs for kubernetes-dashboard [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a] ...
	I0927 18:37:11.233060  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:11.279344  509591 logs.go:123] Gathering logs for describe nodes ...
	I0927 18:37:11.279374  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 18:37:11.439220  509591 logs.go:123] Gathering logs for kube-controller-manager [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b] ...
	I0927 18:37:11.439252  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:11.499790  509591 logs.go:123] Gathering logs for kindnet [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268] ...
	I0927 18:37:11.499831  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:11.545117  509591 logs.go:123] Gathering logs for kube-apiserver [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151] ...
	I0927 18:37:11.545150  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:11.604030  509591 logs.go:123] Gathering logs for kube-apiserver [4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631] ...
	I0927 18:37:11.604064  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:11.663043  509591 logs.go:123] Gathering logs for etcd [67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a] ...
	I0927 18:37:11.663091  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:11.709570  509591 logs.go:123] Gathering logs for kube-scheduler [9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289] ...
	I0927 18:37:11.709645  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:11.751374  509591 logs.go:123] Gathering logs for kube-proxy [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67] ...
	I0927 18:37:11.751409  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:11.789842  509591 logs.go:123] Gathering logs for container status ...
	I0927 18:37:11.789872  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 18:37:11.835768  509591 logs.go:123] Gathering logs for kubelet ...
	I0927 18:37:11.835798  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 18:37:11.888930  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:42 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.963284     657 reflector.go:138] object-"default"/"default-token-ncsvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ncsvq" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889192  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980574     657 reflector.go:138] object-"kube-system"/"kindnet-token-4c6k8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4c6k8" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889420  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980841     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-msgkf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-msgkf" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889655  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.981364     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hdwrd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hdwrd" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.891900  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043122     657 reflector.go:138] object-"kube-system"/"metrics-server-token-m6vt7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-m6vt7" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.892107  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043201     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.893711  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.071223     657 reflector.go:138] object-"kube-system"/"coredns-token-nlnv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nlnv6" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.893914  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.008976     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.902462  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:45 old-k8s-version-313926 kubelet[657]: E0927 18:31:45.917706     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.902652  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:46 old-k8s-version-313926 kubelet[657]: E0927 18:31:46.301139     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.905462  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:59 old-k8s-version-313926 kubelet[657]: E0927 18:31:59.162434     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.907410  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:06 old-k8s-version-313926 kubelet[657]: E0927 18:32:06.395938     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.907874  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:07 old-k8s-version-313926 kubelet[657]: E0927 18:32:07.401770     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.908228  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:09 old-k8s-version-313926 kubelet[657]: E0927 18:32:09.667132     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.908415  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:13 old-k8s-version-313926 kubelet[657]: E0927 18:32:13.135325     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.909187  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:16 old-k8s-version-313926 kubelet[657]: E0927 18:32:16.435963     657 pod_workers.go:191] Error syncing pod 8d21b02b-38af-4ef7-a435-3f3de26186e1 ("storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"
	W0927 18:37:11.910135  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:24 old-k8s-version-313926 kubelet[657]: E0927 18:32:24.532611     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.912589  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:27 old-k8s-version-313926 kubelet[657]: E0927 18:32:27.146345     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.912918  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:29 old-k8s-version-313926 kubelet[657]: E0927 18:32:29.667510     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.913234  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:40 old-k8s-version-313926 kubelet[657]: E0927 18:32:40.135454     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.913833  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:44 old-k8s-version-313926 kubelet[657]: E0927 18:32:44.605072     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914162  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:49 old-k8s-version-313926 kubelet[657]: E0927 18:32:49.667087     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914348  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:55 old-k8s-version-313926 kubelet[657]: E0927 18:32:55.138594     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.914708  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:04 old-k8s-version-313926 kubelet[657]: E0927 18:33:04.135348     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914896  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:06 old-k8s-version-313926 kubelet[657]: E0927 18:33:06.135559     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.915232  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:15 old-k8s-version-313926 kubelet[657]: E0927 18:33:15.137034     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.917682  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:18 old-k8s-version-313926 kubelet[657]: E0927 18:33:18.150090     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.918277  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:29 old-k8s-version-313926 kubelet[657]: E0927 18:33:29.728038     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.918473  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:33 old-k8s-version-313926 kubelet[657]: E0927 18:33:33.135093     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.918801  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:39 old-k8s-version-313926 kubelet[657]: E0927 18:33:39.667115     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.918986  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:46 old-k8s-version-313926 kubelet[657]: E0927 18:33:46.139080     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.919314  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:54 old-k8s-version-313926 kubelet[657]: E0927 18:33:54.135098     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.919500  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:57 old-k8s-version-313926 kubelet[657]: E0927 18:33:57.135120     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.919692  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:08 old-k8s-version-313926 kubelet[657]: E0927 18:34:08.136761     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.920020  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:09 old-k8s-version-313926 kubelet[657]: E0927 18:34:09.134623     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.920349  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:20 old-k8s-version-313926 kubelet[657]: E0927 18:34:20.139455     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.920534  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:22 old-k8s-version-313926 kubelet[657]: E0927 18:34:22.138347     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.920863  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:32 old-k8s-version-313926 kubelet[657]: E0927 18:34:32.137996     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.921048  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:34 old-k8s-version-313926 kubelet[657]: E0927 18:34:34.136489     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.921384  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:45 old-k8s-version-313926 kubelet[657]: E0927 18:34:45.146030     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.923816  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:46 old-k8s-version-313926 kubelet[657]: E0927 18:34:46.145411     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.924407  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:56 old-k8s-version-313926 kubelet[657]: E0927 18:34:56.970275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.924595  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:57 old-k8s-version-313926 kubelet[657]: E0927 18:34:57.135184     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.924924  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:59 old-k8s-version-313926 kubelet[657]: E0927 18:34:59.667001     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.925111  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:11 old-k8s-version-313926 kubelet[657]: E0927 18:35:11.135708     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.925442  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:12 old-k8s-version-313926 kubelet[657]: E0927 18:35:12.134696     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.925661  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:24 old-k8s-version-313926 kubelet[657]: E0927 18:35:24.137383     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.925991  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:25 old-k8s-version-313926 kubelet[657]: E0927 18:35:25.134891     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.926178  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:35 old-k8s-version-313926 kubelet[657]: E0927 18:35:35.135177     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.926517  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:39 old-k8s-version-313926 kubelet[657]: E0927 18:35:39.135338     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.926847  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.136131     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.927034  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.135330     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.927218  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:01 old-k8s-version-313926 kubelet[657]: E0927 18:36:01.135157     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.927548  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:03 old-k8s-version-313926 kubelet[657]: E0927 18:36:03.135275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.927736  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:15 old-k8s-version-313926 kubelet[657]: E0927 18:36:15.145862     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.928062  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: E0927 18:36:17.134586     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.928248  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:26 old-k8s-version-313926 kubelet[657]: E0927 18:36:26.134898     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.928597  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: E0927 18:36:31.135626     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.928787  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.929115  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.929372  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.929707  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.929898  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
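
The W-prefixed "Found kubelet problem" lines above are the result of filtering the kubelet journal gathered at 18:37:11.835 for error-level records. The sketch below shows one way to express such a filter; the regular expression is an illustrative assumption that matches the two sources flagged in this run (reflector.go and pod_workers.go), not minikube's actual logs.go:138 rule set.

    // kubelet_problems.go - sketch, not minikube's problem detector: keep
    // journal lines that look like klog errors from the sources seen above.
    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // problemRe is an assumed pattern: a kubelet klog error line mentioning
    // one of the two files flagged in this run.
    var problemRe = regexp.MustCompile(`kubelet\[\d+\]: E\d{4} .*(reflector\.go|pod_workers\.go)`)

    func findProblems(journal string) []string {
        var hits []string
        for _, line := range strings.Split(journal, "\n") {
            if problemRe.MatchString(line) {
                hits = append(hits, line)
            }
        }
        return hits
    }

    func main() {
        sample := `Sep 27 18:37:06 node kubelet[657]: E0927 18:37:06.141600 657 pod_workers.go:191] Error syncing pod`
        fmt.Println(findProblems(sample)) // prints the one matching line
    }
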
	I0927 18:37:11.929910  509591 logs.go:123] Gathering logs for coredns [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87] ...
	I0927 18:37:11.929928  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:11.967794  509591 logs.go:123] Gathering logs for kube-scheduler [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546] ...
	I0927 18:37:11.967829  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:12.007234  509591 logs.go:123] Gathering logs for storage-provisioner [86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef] ...
	I0927 18:37:12.007265  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:12.058606  509591 logs.go:123] Gathering logs for containerd ...
	I0927 18:37:12.058636  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 18:37:12.125981  509591 logs.go:123] Gathering logs for dmesg ...
	I0927 18:37:12.126034  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
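
Every "Gathering logs for <component> [<id>] ..." pair above is one remote command, `sudo /usr/bin/crictl logs --tail 400 <id>`, capping each container at 400 lines, followed by similarly bounded journalctl and dmesg tails. Below is a sketch of that fan-out over the IDs collected earlier; running the commands locally and folding per-container failures into the output are assumptions of the sketch, not the test's behavior.

    // gather_logs.go - sketch of the logs.go:123 fan-out above: one bounded
    // `crictl logs --tail 400` call per container ID. The real run sends each
    // command through the SSH runner; here it is local for brevity.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherLogs(ids []string) map[string]string {
        logs := make(map[string]string)
        for _, id := range ids {
            out, err := exec.Command("/bin/bash", "-c",
                "sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
            if err != nil {
                // a dead container's logs may already be gone; keep going
                logs[id] = fmt.Sprintf("gather failed: %v\n%s", err, out)
                continue
            }
            logs[id] = string(out)
        }
        return logs
    }

    func main() {
        ids := []string{"85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"}
        for id, l := range gatherLogs(ids) {
            fmt.Printf("== %s ==\n%s\n", id[:12], l)
        }
    }
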
	I0927 18:37:12.168151  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:12.168176  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 18:37:12.168260  509591 out.go:270] X Problems detected in kubelet:
	W0927 18:37:12.168290  509591 out.go:270]   Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:12.168305  509591 out.go:270]   Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:12.168314  509591 out.go:270]   Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:12.168337  509591 out.go:270]   Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:12.168350  509591 out.go:270]   Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:12.168358  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:12.168371  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:22.169134  509591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:37:22.183163  509591 api_server.go:72] duration metric: took 5m56.052597501s to wait for apiserver process to appear ...
	I0927 18:37:22.183185  509591 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:37:22.183221  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 18:37:22.183283  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 18:37:22.243147  509591 cri.go:89] found id: "85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:22.243171  509591 cri.go:89] found id: "4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:22.243177  509591 cri.go:89] found id: ""
	I0927 18:37:22.243196  509591 logs.go:276] 2 containers: [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631]
	I0927 18:37:22.243254  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.248620  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.253646  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 18:37:22.253731  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 18:37:22.320532  509591 cri.go:89] found id: "e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:22.320616  509591 cri.go:89] found id: "67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:22.320642  509591 cri.go:89] found id: ""
	I0927 18:37:22.320681  509591 logs.go:276] 2 containers: [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a]
	I0927 18:37:22.320769  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.325468  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.330520  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 18:37:22.330601  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 18:37:22.385427  509591 cri.go:89] found id: "ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:22.385456  509591 cri.go:89] found id: "3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:22.385466  509591 cri.go:89] found id: ""
	I0927 18:37:22.385474  509591 logs.go:276] 2 containers: [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c]
	I0927 18:37:22.385533  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.390761  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.395352  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 18:37:22.395507  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 18:37:22.450433  509591 cri.go:89] found id: "51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:22.450459  509591 cri.go:89] found id: "9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:22.450464  509591 cri.go:89] found id: ""
	I0927 18:37:22.450505  509591 logs.go:276] 2 containers: [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289]
	I0927 18:37:22.450589  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.455198  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.459233  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 18:37:22.459360  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 18:37:22.548138  509591 cri.go:89] found id: "fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:22.548177  509591 cri.go:89] found id: "4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:22.548183  509591 cri.go:89] found id: ""
	I0927 18:37:22.548191  509591 logs.go:276] 2 containers: [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd]
	I0927 18:37:22.548280  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.552629  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.557112  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 18:37:22.557228  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 18:37:22.608500  509591 cri.go:89] found id: "a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:22.608525  509591 cri.go:89] found id: "0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:22.608541  509591 cri.go:89] found id: ""
	I0927 18:37:22.608619  509591 logs.go:276] 2 containers: [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055]
	I0927 18:37:22.608701  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.613019  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.618228  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 18:37:22.618331  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 18:37:22.667157  509591 cri.go:89] found id: "6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:22.667180  509591 cri.go:89] found id: "0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:22.667197  509591 cri.go:89] found id: ""
	I0927 18:37:22.667219  509591 logs.go:276] 2 containers: [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d]
	I0927 18:37:22.667300  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.671396  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.675112  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 18:37:22.675237  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 18:37:22.724970  509591 cri.go:89] found id: "b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:22.724996  509591 cri.go:89] found id: ""
	I0927 18:37:22.725005  509591 logs.go:276] 1 containers: [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a]
	I0927 18:37:22.725087  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.729152  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0927 18:37:22.729284  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 18:37:22.778371  509591 cri.go:89] found id: "e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:22.778404  509591 cri.go:89] found id: "86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:22.778410  509591 cri.go:89] found id: ""
	I0927 18:37:22.778417  509591 logs.go:276] 2 containers: [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef]
	I0927 18:37:22.778522  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.782527  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.786576  509591 logs.go:123] Gathering logs for kube-apiserver [4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631] ...
	I0927 18:37:22.786629  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:22.860905  509591 logs.go:123] Gathering logs for etcd [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab] ...
	I0927 18:37:22.860942  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:22.920203  509591 logs.go:123] Gathering logs for etcd [67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a] ...
	I0927 18:37:22.920235  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:22.990981  509591 logs.go:123] Gathering logs for kube-scheduler [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546] ...
	I0927 18:37:22.991028  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:23.049744  509591 logs.go:123] Gathering logs for storage-provisioner [86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef] ...
	I0927 18:37:23.049775  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:23.102110  509591 logs.go:123] Gathering logs for describe nodes ...
	I0927 18:37:23.102140  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 18:37:23.305600  509591 logs.go:123] Gathering logs for coredns [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87] ...
	I0927 18:37:23.305636  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:23.360521  509591 logs.go:123] Gathering logs for kube-scheduler [9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289] ...
	I0927 18:37:23.360556  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:23.421593  509591 logs.go:123] Gathering logs for kube-proxy [4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd] ...
	I0927 18:37:23.421635  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:23.479620  509591 logs.go:123] Gathering logs for kube-controller-manager [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b] ...
	I0927 18:37:23.479649  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:23.555578  509591 logs.go:123] Gathering logs for storage-provisioner [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0] ...
	I0927 18:37:23.555616  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:23.604573  509591 logs.go:123] Gathering logs for containerd ...
	I0927 18:37:23.604603  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 18:37:23.672862  509591 logs.go:123] Gathering logs for kubelet ...
	I0927 18:37:23.672900  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 18:37:23.727037  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:42 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.963284     657 reflector.go:138] object-"default"/"default-token-ncsvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ncsvq" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727304  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980574     657 reflector.go:138] object-"kube-system"/"kindnet-token-4c6k8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4c6k8" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727547  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980841     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-msgkf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-msgkf" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727797  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.981364     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hdwrd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hdwrd" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.730468  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043122     657 reflector.go:138] object-"kube-system"/"metrics-server-token-m6vt7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-m6vt7" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.730716  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043201     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.732343  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.071223     657 reflector.go:138] object-"kube-system"/"coredns-token-nlnv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nlnv6" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.732566  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.008976     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.741449  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:45 old-k8s-version-313926 kubelet[657]: E0927 18:31:45.917706     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.741694  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:46 old-k8s-version-313926 kubelet[657]: E0927 18:31:46.301139     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.744588  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:59 old-k8s-version-313926 kubelet[657]: E0927 18:31:59.162434     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.746591  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:06 old-k8s-version-313926 kubelet[657]: E0927 18:32:06.395938     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747093  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:07 old-k8s-version-313926 kubelet[657]: E0927 18:32:07.401770     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747485  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:09 old-k8s-version-313926 kubelet[657]: E0927 18:32:09.667132     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747699  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:13 old-k8s-version-313926 kubelet[657]: E0927 18:32:13.135325     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.748487  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:16 old-k8s-version-313926 kubelet[657]: E0927 18:32:16.435963     657 pod_workers.go:191] Error syncing pod 8d21b02b-38af-4ef7-a435-3f3de26186e1 ("storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"
	W0927 18:37:23.749437  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:24 old-k8s-version-313926 kubelet[657]: E0927 18:32:24.532611     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.752028  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:27 old-k8s-version-313926 kubelet[657]: E0927 18:32:27.146345     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.752431  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:29 old-k8s-version-313926 kubelet[657]: E0927 18:32:29.667510     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.752776  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:40 old-k8s-version-313926 kubelet[657]: E0927 18:32:40.135454     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.753407  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:44 old-k8s-version-313926 kubelet[657]: E0927 18:32:44.605072     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.753759  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:49 old-k8s-version-313926 kubelet[657]: E0927 18:32:49.667087     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.753968  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:55 old-k8s-version-313926 kubelet[657]: E0927 18:32:55.138594     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.754321  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:04 old-k8s-version-313926 kubelet[657]: E0927 18:33:04.135348     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.754529  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:06 old-k8s-version-313926 kubelet[657]: E0927 18:33:06.135559     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.754878  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:15 old-k8s-version-313926 kubelet[657]: E0927 18:33:15.137034     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.757356  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:18 old-k8s-version-313926 kubelet[657]: E0927 18:33:18.150090     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.758049  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:29 old-k8s-version-313926 kubelet[657]: E0927 18:33:29.728038     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.758334  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:33 old-k8s-version-313926 kubelet[657]: E0927 18:33:33.135093     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.758692  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:39 old-k8s-version-313926 kubelet[657]: E0927 18:33:39.667115     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.758905  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:46 old-k8s-version-313926 kubelet[657]: E0927 18:33:46.139080     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.759328  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:54 old-k8s-version-313926 kubelet[657]: E0927 18:33:54.135098     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.759538  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:57 old-k8s-version-313926 kubelet[657]: E0927 18:33:57.135120     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.759746  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:08 old-k8s-version-313926 kubelet[657]: E0927 18:34:08.136761     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.760112  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:09 old-k8s-version-313926 kubelet[657]: E0927 18:34:09.134623     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.760470  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:20 old-k8s-version-313926 kubelet[657]: E0927 18:34:20.139455     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.760674  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:22 old-k8s-version-313926 kubelet[657]: E0927 18:34:22.138347     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.761022  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:32 old-k8s-version-313926 kubelet[657]: E0927 18:34:32.137996     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.761244  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:34 old-k8s-version-313926 kubelet[657]: E0927 18:34:34.136489     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.761601  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:45 old-k8s-version-313926 kubelet[657]: E0927 18:34:45.146030     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.764181  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:46 old-k8s-version-313926 kubelet[657]: E0927 18:34:46.145411     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.764797  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:56 old-k8s-version-313926 kubelet[657]: E0927 18:34:56.970275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.765007  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:57 old-k8s-version-313926 kubelet[657]: E0927 18:34:57.135184     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.765368  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:59 old-k8s-version-313926 kubelet[657]: E0927 18:34:59.667001     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.765575  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:11 old-k8s-version-313926 kubelet[657]: E0927 18:35:11.135708     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.765923  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:12 old-k8s-version-313926 kubelet[657]: E0927 18:35:12.134696     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.766131  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:24 old-k8s-version-313926 kubelet[657]: E0927 18:35:24.137383     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.766482  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:25 old-k8s-version-313926 kubelet[657]: E0927 18:35:25.134891     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.766693  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:35 old-k8s-version-313926 kubelet[657]: E0927 18:35:35.135177     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.767040  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:39 old-k8s-version-313926 kubelet[657]: E0927 18:35:39.135338     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.767389  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.136131     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.767595  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.135330     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.767803  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:01 old-k8s-version-313926 kubelet[657]: E0927 18:36:01.135157     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.768152  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:03 old-k8s-version-313926 kubelet[657]: E0927 18:36:03.135275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.768369  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:15 old-k8s-version-313926 kubelet[657]: E0927 18:36:15.145862     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.768745  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: E0927 18:36:17.134586     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.768953  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:26 old-k8s-version-313926 kubelet[657]: E0927 18:36:26.134898     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.769350  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: E0927 18:36:31.135626     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.769557  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.769926  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.770150  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.770548  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.770740  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.771067  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: E0927 18:37:12.135076     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.771252  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:17 old-k8s-version-313926 kubelet[657]: E0927 18:37:17.138317     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:23.771262  509591 logs.go:123] Gathering logs for coredns [3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c] ...
	I0927 18:37:23.771279  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:23.819339  509591 logs.go:123] Gathering logs for kube-controller-manager [0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055] ...
	I0927 18:37:23.819370  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:23.873786  509591 logs.go:123] Gathering logs for kindnet [0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d] ...
	I0927 18:37:23.873824  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:23.923360  509591 logs.go:123] Gathering logs for dmesg ...
	I0927 18:37:23.923393  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 18:37:23.939919  509591 logs.go:123] Gathering logs for kube-apiserver [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151] ...
	I0927 18:37:23.939949  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:24.005877  509591 logs.go:123] Gathering logs for kube-proxy [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67] ...
	I0927 18:37:24.005914  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:24.067764  509591 logs.go:123] Gathering logs for kindnet [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268] ...
	I0927 18:37:24.067873  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:24.163533  509591 logs.go:123] Gathering logs for kubernetes-dashboard [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a] ...
	I0927 18:37:24.163611  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:24.237049  509591 logs.go:123] Gathering logs for container status ...
	I0927 18:37:24.237130  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 18:37:24.338852  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:24.338929  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 18:37:24.339073  509591 out.go:270] X Problems detected in kubelet:
	W0927 18:37:24.339236  509591 out.go:270]   Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:24.339280  509591 out.go:270]   Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:24.339390  509591 out.go:270]   Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:24.339426  509591 out.go:270]   Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: E0927 18:37:12.135076     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:24.339488  509591 out.go:270]   Sep 27 18:37:17 old-k8s-version-313926 kubelet[657]: E0927 18:37:17.138317     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:24.339524  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:24.339560  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:34.340625  509591 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0927 18:37:34.350621  509591 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0927 18:37:34.352786  509591 out.go:201] 
	W0927 18:37:34.354551  509591 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0927 18:37:34.354706  509591 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0927 18:37:34.354736  509591 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0927 18:37:34.354747  509591 out.go:270] * 
	W0927 18:37:34.356466  509591 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 18:37:34.358492  509591 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
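For local triage, a minimal reproduction sketch (not part of the harness output): the start flags and profile name below are copied verbatim from the failing invocation above, and the cleanup step is minikube's own suggestion from the log, not a verified fix; the binary path assumes the same locally built binary the harness used.

	# Purge the wedged profile first, per the "Suggestion" printed in the stderr above
	out/minikube-linux-arm64 delete --all --purge
	# Re-run the exact start that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	# Related upstream issue: https://github.com/kubernetes/minikube/issues/11417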
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-313926
helpers_test.go:235: (dbg) docker inspect old-k8s-version-313926:

-- stdout --
	[
	    {
	        "Id": "65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991",
	        "Created": "2024-09-27T18:28:08.665533036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509795,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T18:31:18.814152051Z",
	            "FinishedAt": "2024-09-27T18:31:17.885606395Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991/hostname",
	        "HostsPath": "/var/lib/docker/containers/65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991/hosts",
	        "LogPath": "/var/lib/docker/containers/65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991/65d28237a9405dd654a906254e8082cb82a2cafd708422179457edf7c770d991-json.log",
	        "Name": "/old-k8s-version-313926",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-313926:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-313926",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bec1001b7cf2be92ce087427ce2d18a542e1d65e9a00d63c17b71489a1ca4727-init/diff:/var/lib/docker/overlay2/a37a697d35bc9dd6b22fe821f055b93d8ecad36dc406167b9eb9ad78951bada0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bec1001b7cf2be92ce087427ce2d18a542e1d65e9a00d63c17b71489a1ca4727/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bec1001b7cf2be92ce087427ce2d18a542e1d65e9a00d63c17b71489a1ca4727/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bec1001b7cf2be92ce087427ce2d18a542e1d65e9a00d63c17b71489a1ca4727/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-313926",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-313926/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-313926",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-313926",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-313926",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26c7a33dea4e059bc9b952fffece69a9a9375ce6e15228e6d9fcf2a2a5eac107",
	            "SandboxKey": "/var/run/docker/netns/26c7a33dea4e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-313926": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8c42e7f25a6f2e7f9ef745cec78c2d3c9f4c1fb1c7fc50fbf3c4f8312a3af0fe",
	                    "EndpointID": "ce8d54de3660e45b97ad50d7d1c853874e469cb3f5ff739b28af1b1fae9b8336",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-313926",
	                        "65d28237a940"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-313926 -n old-k8s-version-313926
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-313926 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-313926 logs -n 25: (2.412188914s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-759007                              | cert-expiration-759007   | jenkins | v1.34.0 | 27 Sep 24 18:26 UTC | 27 Sep 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-641482                               | force-systemd-env-641482 | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-641482                            | force-systemd-env-641482 | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	| start   | -p cert-options-933427                                 | cert-options-933427      | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-933427 ssh                                | cert-options-933427      | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-933427 -- sudo                         | cert-options-933427      | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-933427                                 | cert-options-933427      | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:28 UTC |
	| start   | -p old-k8s-version-313926                              | old-k8s-version-313926   | jenkins | v1.34.0 | 27 Sep 24 18:28 UTC | 27 Sep 24 18:30 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-759007                              | cert-expiration-759007   | jenkins | v1.34.0 | 27 Sep 24 18:30 UTC | 27 Sep 24 18:30 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-759007                              | cert-expiration-759007   | jenkins | v1.34.0 | 27 Sep 24 18:30 UTC | 27 Sep 24 18:30 UTC |
	| start   | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:30 UTC | 27 Sep 24 18:31 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-313926        | old-k8s-version-313926   | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC | 27 Sep 24 18:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-313926                              | old-k8s-version-313926   | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC | 27 Sep 24 18:31 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-313926             | old-k8s-version-313926   | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC | 27 Sep 24 18:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-313926                              | old-k8s-version-313926   | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-446590             | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC | 27 Sep 24 18:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:31 UTC | 27 Sep 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-446590                  | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:32 UTC | 27 Sep 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:32 UTC | 27 Sep 24 18:36 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-446590 image list                           | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:37 UTC |
	| delete  | -p no-preload-446590                                   | no-preload-446590        | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC | 27 Sep 24 18:37 UTC |
	| start   | -p embed-certs-437083                                  | embed-certs-437083       | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:37:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:37:01.102593  519752 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:37:01.102781  519752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:01.102794  519752 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:01.102801  519752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:01.103073  519752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:37:01.103506  519752 out.go:352] Setting JSON to false
	I0927 18:37:01.104588  519752 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8372,"bootTime":1727453849,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 18:37:01.104663  519752 start.go:139] virtualization:  
	I0927 18:37:01.107910  519752 out.go:177] * [embed-certs-437083] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 18:37:01.110637  519752 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:37:01.110773  519752 notify.go:220] Checking for updates...
	I0927 18:37:01.114457  519752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:37:01.116268  519752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 18:37:01.118065  519752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 18:37:01.120027  519752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 18:37:01.122443  519752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:37:01.125153  519752 config.go:182] Loaded profile config "old-k8s-version-313926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0927 18:37:01.125312  519752 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:37:01.148161  519752 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 18:37:01.148317  519752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:37:01.214592  519752 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 18:37:01.190556691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:37:01.214710  519752 docker.go:318] overlay module found
	I0927 18:37:01.218269  519752 out.go:177] * Using the docker driver based on user configuration
	I0927 18:37:01.220152  519752 start.go:297] selected driver: docker
	I0927 18:37:01.220176  519752 start.go:901] validating driver "docker" against <nil>
	I0927 18:37:01.220199  519752 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:37:01.220862  519752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:37:01.293941  519752 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 18:37:01.283679556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:37:01.294203  519752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 18:37:01.294438  519752 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:37:01.296426  519752 out.go:177] * Using Docker driver with root privileges
	I0927 18:37:01.298028  519752 cni.go:84] Creating CNI manager for ""
	I0927 18:37:01.298117  519752 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 18:37:01.298130  519752 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 18:37:01.298209  519752 start.go:340] cluster config:
	{Name:embed-certs-437083 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-437083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:01.299978  519752 out.go:177] * Starting "embed-certs-437083" primary control-plane node in "embed-certs-437083" cluster
	I0927 18:37:01.301629  519752 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 18:37:01.303102  519752 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 18:37:01.304790  519752 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 18:37:01.304848  519752 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 18:37:01.304860  519752 cache.go:56] Caching tarball of preloaded images
	I0927 18:37:01.304882  519752 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 18:37:01.304943  519752 preload.go:172] Found /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0927 18:37:01.304953  519752 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0927 18:37:01.305069  519752 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/config.json ...
	I0927 18:37:01.305087  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/config.json: {Name:mkae34e8fd66a61253f25dabcb0bccdc3d0b6c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:01.324978  519752 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0927 18:37:01.325006  519752 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0927 18:37:01.325019  519752 cache.go:194] Successfully downloaded all kic artifacts
	I0927 18:37:01.325054  519752 start.go:360] acquireMachinesLock for embed-certs-437083: {Name:mk00551e95293ebeedf3b82c4fd60c9c6be1172c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:01.325564  519752 start.go:364] duration metric: took 485.599µs to acquireMachinesLock for "embed-certs-437083"
	I0927 18:37:01.325608  519752 start.go:93] Provisioning new machine with config: &{Name:embed-certs-437083 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-437083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0927 18:37:01.325685  519752 start.go:125] createHost starting for "" (driver="docker")
	I0927 18:37:00.214662  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:02.530965  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:01.330120  519752 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0927 18:37:01.330386  519752 start.go:159] libmachine.API.Create for "embed-certs-437083" (driver="docker")
	I0927 18:37:01.330428  519752 client.go:168] LocalClient.Create starting
	I0927 18:37:01.330517  519752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem
	I0927 18:37:01.330550  519752 main.go:141] libmachine: Decoding PEM data...
	I0927 18:37:01.330572  519752 main.go:141] libmachine: Parsing certificate...
	I0927 18:37:01.330621  519752 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem
	I0927 18:37:01.330638  519752 main.go:141] libmachine: Decoding PEM data...
	I0927 18:37:01.330647  519752 main.go:141] libmachine: Parsing certificate...
	I0927 18:37:01.331063  519752 cli_runner.go:164] Run: docker network inspect embed-certs-437083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 18:37:01.348374  519752 cli_runner.go:211] docker network inspect embed-certs-437083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 18:37:01.348486  519752 network_create.go:284] running [docker network inspect embed-certs-437083] to gather additional debugging logs...
	I0927 18:37:01.348504  519752 cli_runner.go:164] Run: docker network inspect embed-certs-437083
	W0927 18:37:01.362935  519752 cli_runner.go:211] docker network inspect embed-certs-437083 returned with exit code 1
	I0927 18:37:01.362974  519752 network_create.go:287] error running [docker network inspect embed-certs-437083]: docker network inspect embed-certs-437083: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-437083 not found
	I0927 18:37:01.362995  519752 network_create.go:289] output of [docker network inspect embed-certs-437083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-437083 not found
	
	** /stderr **
	I0927 18:37:01.363129  519752 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 18:37:01.383268  519752 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-adf1e8729b5f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0f:46:76:c6} reservation:<nil>}
	I0927 18:37:01.383811  519752 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f9be558b378b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6c:7e:cf:e9} reservation:<nil>}
	I0927 18:37:01.384160  519752 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b1a4f6cb4063 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:91:19:98:e3} reservation:<nil>}
	I0927 18:37:01.384565  519752 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8c42e7f25a6f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f3:fb:23:24} reservation:<nil>}
	I0927 18:37:01.385074  519752 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400189f290}
	I0927 18:37:01.385099  519752 network_create.go:124] attempt to create docker network embed-certs-437083 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0927 18:37:01.385158  519752 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-437083 embed-certs-437083
	I0927 18:37:01.458350  519752 network_create.go:108] docker network embed-certs-437083 192.168.85.0/24 created
	I0927 18:37:01.458391  519752 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-437083" container
	I0927 18:37:01.458479  519752 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 18:37:01.473751  519752 cli_runner.go:164] Run: docker volume create embed-certs-437083 --label name.minikube.sigs.k8s.io=embed-certs-437083 --label created_by.minikube.sigs.k8s.io=true
	I0927 18:37:01.491638  519752 oci.go:103] Successfully created a docker volume embed-certs-437083
	I0927 18:37:01.491738  519752 cli_runner.go:164] Run: docker run --rm --name embed-certs-437083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-437083 --entrypoint /usr/bin/test -v embed-certs-437083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 18:37:02.126894  519752 oci.go:107] Successfully prepared a docker volume embed-certs-437083
	I0927 18:37:02.126937  519752 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 18:37:02.126965  519752 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 18:37:02.127038  519752 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-437083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 18:37:04.531322  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:07.031043  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:08.455112  519752 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-437083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (6.328034393s)
	I0927 18:37:08.455148  519752 kic.go:203] duration metric: took 6.328187756s to extract preloaded images to volume ...
	W0927 18:37:08.455297  519752 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 18:37:08.455434  519752 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 18:37:08.507200  519752 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-437083 --name embed-certs-437083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-437083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-437083 --network embed-certs-437083 --ip 192.168.85.2 --volume embed-certs-437083:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 18:37:08.860164  519752 cli_runner.go:164] Run: docker container inspect embed-certs-437083 --format={{.State.Running}}
	I0927 18:37:08.883041  519752 cli_runner.go:164] Run: docker container inspect embed-certs-437083 --format={{.State.Status}}
	I0927 18:37:08.913772  519752 cli_runner.go:164] Run: docker exec embed-certs-437083 stat /var/lib/dpkg/alternatives/iptables
	I0927 18:37:08.980696  519752 oci.go:144] the created container "embed-certs-437083" has a running status.
	I0927 18:37:08.980734  519752 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa...
	I0927 18:37:09.348424  519752 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 18:37:09.383891  519752 cli_runner.go:164] Run: docker container inspect embed-certs-437083 --format={{.State.Status}}
	I0927 18:37:09.406046  519752 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 18:37:09.406067  519752 kic_runner.go:114] Args: [docker exec --privileged embed-certs-437083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 18:37:09.471553  519752 cli_runner.go:164] Run: docker container inspect embed-certs-437083 --format={{.State.Status}}
	I0927 18:37:09.509357  519752 machine.go:93] provisionDockerMachine start ...
	I0927 18:37:09.509461  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:09.545397  519752 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:09.545710  519752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0927 18:37:09.545725  519752 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:37:09.546729  519752 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0927 18:37:09.040518  509591 pod_ready.go:103] pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:10.530624  509591 pod_ready.go:82] duration metric: took 4m0.006642523s for pod "metrics-server-9975d5f86-cft95" in "kube-system" namespace to be "Ready" ...
	E0927 18:37:10.530651  509591 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 18:37:10.530660  509591 pod_ready.go:39] duration metric: took 5m27.5144681s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:37:10.530676  509591 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:37:10.530709  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 18:37:10.530777  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 18:37:10.568807  509591 cri.go:89] found id: "85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:10.568833  509591 cri.go:89] found id: "4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:10.568839  509591 cri.go:89] found id: ""
	I0927 18:37:10.568847  509591 logs.go:276] 2 containers: [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631]
	I0927 18:37:10.568923  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.572694  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.575917  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 18:37:10.575991  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 18:37:10.621655  509591 cri.go:89] found id: "e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:10.621682  509591 cri.go:89] found id: "67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:10.621687  509591 cri.go:89] found id: ""
	I0927 18:37:10.621695  509591 logs.go:276] 2 containers: [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a]
	I0927 18:37:10.621752  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.625325  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.628506  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 18:37:10.628591  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 18:37:10.669907  509591 cri.go:89] found id: "ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:10.669930  509591 cri.go:89] found id: "3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:10.669935  509591 cri.go:89] found id: ""
	I0927 18:37:10.669944  509591 logs.go:276] 2 containers: [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c]
	I0927 18:37:10.670028  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.673878  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.677166  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 18:37:10.677305  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 18:37:10.715097  509591 cri.go:89] found id: "51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:10.715161  509591 cri.go:89] found id: "9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:10.715182  509591 cri.go:89] found id: ""
	I0927 18:37:10.715207  509591 logs.go:276] 2 containers: [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289]
	I0927 18:37:10.715291  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.718970  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.722461  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 18:37:10.722542  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 18:37:10.766417  509591 cri.go:89] found id: "fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:10.766442  509591 cri.go:89] found id: "4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:10.766448  509591 cri.go:89] found id: ""
	I0927 18:37:10.766455  509591 logs.go:276] 2 containers: [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd]
	I0927 18:37:10.766543  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.770255  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.774150  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 18:37:10.774269  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 18:37:10.820026  509591 cri.go:89] found id: "a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:10.820053  509591 cri.go:89] found id: "0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:10.820058  509591 cri.go:89] found id: ""
	I0927 18:37:10.820066  509591 logs.go:276] 2 containers: [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055]
	I0927 18:37:10.820154  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.823895  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.827543  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 18:37:10.827628  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 18:37:10.867993  509591 cri.go:89] found id: "6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:10.868018  509591 cri.go:89] found id: "0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:10.868026  509591 cri.go:89] found id: ""
	I0927 18:37:10.868034  509591 logs.go:276] 2 containers: [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d]
	I0927 18:37:10.868115  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.871642  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.875460  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0927 18:37:10.875536  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 18:37:10.912536  509591 cri.go:89] found id: "e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:10.912559  509591 cri.go:89] found id: "86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:10.912565  509591 cri.go:89] found id: ""
	I0927 18:37:10.912572  509591 logs.go:276] 2 containers: [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef]
	I0927 18:37:10.912640  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.916243  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:10.919651  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 18:37:10.919727  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 18:37:10.960688  509591 cri.go:89] found id: "b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:10.960761  509591 cri.go:89] found id: ""
	I0927 18:37:10.960775  509591 logs.go:276] 1 containers: [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a]
	I0927 18:37:10.960850  509591 ssh_runner.go:195] Run: which crictl
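	The run of crictl calls above follows one pattern per control-plane component: `sudo crictl ps -a --quiet --name=<component>` collects container IDs (current plus exited, hence two IDs for most components here), and the "Gathering logs" lines that follow then run `crictl logs --tail 400 <id>` for each. A condensed sketch of that loop (assumes crictl on PATH and passwordless sudo; not minikube's logs.go):

	    // Sketch only: the component -> crictl ps -> crictl logs loop.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("%s: %v\n", name, err)
	                continue
	            }
	            for _, id := range strings.Fields(string(ids)) {
	                logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
	            }
	        }
	    }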
	I0927 18:37:10.964327  509591 logs.go:123] Gathering logs for etcd [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab] ...
	I0927 18:37:10.964351  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:11.007263  509591 logs.go:123] Gathering logs for coredns [3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c] ...
	I0927 18:37:11.007290  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:11.052851  509591 logs.go:123] Gathering logs for kube-proxy [4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd] ...
	I0927 18:37:11.052882  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:11.093894  509591 logs.go:123] Gathering logs for kube-controller-manager [0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055] ...
	I0927 18:37:11.093924  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:11.155524  509591 logs.go:123] Gathering logs for kindnet [0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d] ...
	I0927 18:37:11.155561  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:11.194499  509591 logs.go:123] Gathering logs for storage-provisioner [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0] ...
	I0927 18:37:11.194532  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:11.233034  509591 logs.go:123] Gathering logs for kubernetes-dashboard [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a] ...
	I0927 18:37:11.233060  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:11.279344  509591 logs.go:123] Gathering logs for describe nodes ...
	I0927 18:37:11.279374  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 18:37:11.439220  509591 logs.go:123] Gathering logs for kube-controller-manager [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b] ...
	I0927 18:37:11.439252  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:11.499790  509591 logs.go:123] Gathering logs for kindnet [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268] ...
	I0927 18:37:11.499831  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:11.545117  509591 logs.go:123] Gathering logs for kube-apiserver [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151] ...
	I0927 18:37:11.545150  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:11.604030  509591 logs.go:123] Gathering logs for kube-apiserver [4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631] ...
	I0927 18:37:11.604064  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:11.663043  509591 logs.go:123] Gathering logs for etcd [67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a] ...
	I0927 18:37:11.663091  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:11.709570  509591 logs.go:123] Gathering logs for kube-scheduler [9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289] ...
	I0927 18:37:11.709645  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:11.751374  509591 logs.go:123] Gathering logs for kube-proxy [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67] ...
	I0927 18:37:11.751409  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:11.789842  509591 logs.go:123] Gathering logs for container status ...
	I0927 18:37:11.789872  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 18:37:11.835768  509591 logs.go:123] Gathering logs for kubelet ...
	I0927 18:37:11.835798  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 18:37:11.888930  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:42 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.963284     657 reflector.go:138] object-"default"/"default-token-ncsvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ncsvq" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889192  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980574     657 reflector.go:138] object-"kube-system"/"kindnet-token-4c6k8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4c6k8" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889420  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980841     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-msgkf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-msgkf" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.889655  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.981364     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hdwrd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hdwrd" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.891900  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043122     657 reflector.go:138] object-"kube-system"/"metrics-server-token-m6vt7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-m6vt7" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.892107  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043201     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.893711  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.071223     657 reflector.go:138] object-"kube-system"/"coredns-token-nlnv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nlnv6" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.893914  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.008976     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:11.902462  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:45 old-k8s-version-313926 kubelet[657]: E0927 18:31:45.917706     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.902652  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:46 old-k8s-version-313926 kubelet[657]: E0927 18:31:46.301139     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.905462  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:59 old-k8s-version-313926 kubelet[657]: E0927 18:31:59.162434     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.907410  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:06 old-k8s-version-313926 kubelet[657]: E0927 18:32:06.395938     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.907874  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:07 old-k8s-version-313926 kubelet[657]: E0927 18:32:07.401770     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.908228  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:09 old-k8s-version-313926 kubelet[657]: E0927 18:32:09.667132     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.908415  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:13 old-k8s-version-313926 kubelet[657]: E0927 18:32:13.135325     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.909187  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:16 old-k8s-version-313926 kubelet[657]: E0927 18:32:16.435963     657 pod_workers.go:191] Error syncing pod 8d21b02b-38af-4ef7-a435-3f3de26186e1 ("storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"
	W0927 18:37:11.910135  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:24 old-k8s-version-313926 kubelet[657]: E0927 18:32:24.532611     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.912589  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:27 old-k8s-version-313926 kubelet[657]: E0927 18:32:27.146345     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.912918  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:29 old-k8s-version-313926 kubelet[657]: E0927 18:32:29.667510     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.913234  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:40 old-k8s-version-313926 kubelet[657]: E0927 18:32:40.135454     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.913833  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:44 old-k8s-version-313926 kubelet[657]: E0927 18:32:44.605072     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914162  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:49 old-k8s-version-313926 kubelet[657]: E0927 18:32:49.667087     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914348  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:55 old-k8s-version-313926 kubelet[657]: E0927 18:32:55.138594     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.914708  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:04 old-k8s-version-313926 kubelet[657]: E0927 18:33:04.135348     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.914896  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:06 old-k8s-version-313926 kubelet[657]: E0927 18:33:06.135559     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.915232  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:15 old-k8s-version-313926 kubelet[657]: E0927 18:33:15.137034     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.917682  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:18 old-k8s-version-313926 kubelet[657]: E0927 18:33:18.150090     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.918277  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:29 old-k8s-version-313926 kubelet[657]: E0927 18:33:29.728038     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.918473  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:33 old-k8s-version-313926 kubelet[657]: E0927 18:33:33.135093     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.918801  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:39 old-k8s-version-313926 kubelet[657]: E0927 18:33:39.667115     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.918986  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:46 old-k8s-version-313926 kubelet[657]: E0927 18:33:46.139080     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.919314  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:54 old-k8s-version-313926 kubelet[657]: E0927 18:33:54.135098     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.919500  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:57 old-k8s-version-313926 kubelet[657]: E0927 18:33:57.135120     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.919692  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:08 old-k8s-version-313926 kubelet[657]: E0927 18:34:08.136761     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.920020  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:09 old-k8s-version-313926 kubelet[657]: E0927 18:34:09.134623     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.920349  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:20 old-k8s-version-313926 kubelet[657]: E0927 18:34:20.139455     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.920534  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:22 old-k8s-version-313926 kubelet[657]: E0927 18:34:22.138347     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.920863  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:32 old-k8s-version-313926 kubelet[657]: E0927 18:34:32.137996     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.921048  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:34 old-k8s-version-313926 kubelet[657]: E0927 18:34:34.136489     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.921384  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:45 old-k8s-version-313926 kubelet[657]: E0927 18:34:45.146030     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.923816  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:46 old-k8s-version-313926 kubelet[657]: E0927 18:34:46.145411     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:11.924407  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:56 old-k8s-version-313926 kubelet[657]: E0927 18:34:56.970275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.924595  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:57 old-k8s-version-313926 kubelet[657]: E0927 18:34:57.135184     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.924924  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:59 old-k8s-version-313926 kubelet[657]: E0927 18:34:59.667001     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.925111  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:11 old-k8s-version-313926 kubelet[657]: E0927 18:35:11.135708     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.925442  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:12 old-k8s-version-313926 kubelet[657]: E0927 18:35:12.134696     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.925661  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:24 old-k8s-version-313926 kubelet[657]: E0927 18:35:24.137383     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.925991  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:25 old-k8s-version-313926 kubelet[657]: E0927 18:35:25.134891     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.926178  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:35 old-k8s-version-313926 kubelet[657]: E0927 18:35:35.135177     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.926517  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:39 old-k8s-version-313926 kubelet[657]: E0927 18:35:39.135338     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.926847  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.136131     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.927034  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.135330     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.927218  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:01 old-k8s-version-313926 kubelet[657]: E0927 18:36:01.135157     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.927548  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:03 old-k8s-version-313926 kubelet[657]: E0927 18:36:03.135275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.927736  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:15 old-k8s-version-313926 kubelet[657]: E0927 18:36:15.145862     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.928062  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: E0927 18:36:17.134586     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.928248  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:26 old-k8s-version-313926 kubelet[657]: E0927 18:36:26.134898     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.928597  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: E0927 18:36:31.135626     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.928787  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.929115  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.929372  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:11.929707  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:11.929898  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:11.929910  509591 logs.go:123] Gathering logs for coredns [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87] ...
	I0927 18:37:11.929928  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:11.967794  509591 logs.go:123] Gathering logs for kube-scheduler [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546] ...
	I0927 18:37:11.967829  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:12.007234  509591 logs.go:123] Gathering logs for storage-provisioner [86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef] ...
	I0927 18:37:12.007265  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:12.058606  509591 logs.go:123] Gathering logs for containerd ...
	I0927 18:37:12.058636  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 18:37:12.125981  509591 logs.go:123] Gathering logs for dmesg ...
	I0927 18:37:12.126034  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 18:37:12.168151  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:12.168176  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 18:37:12.168260  509591 out.go:270] X Problems detected in kubelet:
	W0927 18:37:12.168290  509591 out.go:270]   Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:12.168305  509591 out.go:270]   Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:12.168314  509591 out.go:270]   Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:12.168337  509591 out.go:270]   Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:12.168350  509591 out.go:270]   Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:12.168358  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:12.168371  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
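	The block above is minikube's kubelet problem scan: it tails the kubelet journal and surfaces error-level pod-sync lines. Two failures repeat throughout: metrics-server stuck in ErrImagePull/ImagePullBackOff because its image points at the unresolvable host `fake.domain` (every pull fails with "no such host"), and dashboard-metrics-scraper in a CrashLoopBackOff with a growing back-off. A rough sketch of such a scan (the regex is an approximation, not the real matcher from logs.go):

	    // Sketch only: tail the kubelet journal and surface error-level lines,
	    // approximating the "Found kubelet problem" scan above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "regexp"
	        "strings"
	    )

	    func main() {
	        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	        if err != nil {
	            fmt.Println("journalctl failed:", err)
	            return
	        }
	        problem := regexp.MustCompile(`kubelet\[\d+\]: E\d{4}`)
	        for _, line := range strings.Split(string(out), "\n") {
	            if problem.MatchString(line) {
	                fmt.Println("Found kubelet problem:", line)
	            }
	        }
	    }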
	I0927 18:37:12.676811  519752 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437083
	
	I0927 18:37:12.676837  519752 ubuntu.go:169] provisioning hostname "embed-certs-437083"
	I0927 18:37:12.676947  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:12.694269  519752 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:12.694568  519752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0927 18:37:12.694588  519752 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-437083 && echo "embed-certs-437083" | sudo tee /etc/hostname
	I0927 18:37:12.842598  519752 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-437083
	
	I0927 18:37:12.842721  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:12.860197  519752 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:12.860475  519752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0927 18:37:12.860500  519752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-437083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-437083/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-437083' | sudo tee -a /etc/hosts; 
				fi
			fi
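	The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already resolves embed-certs-437083, it rewrites the 127.0.1.1 entry in place, otherwise appends one. The same logic as a self-contained Go sketch operating on a string (illustrative; the actual change is made by the grep/sed/tee pipeline shown over SSH):

	    // Sketch only: the /etc/hosts pinning from the SSH command above.
	    package main

	    import (
	        "fmt"
	        "regexp"
	        "strings"
	    )

	    func pinHostname(hosts, name string) string {
	        // Mirrors `grep -xq '.*\s<name>'`: hostname already resolvable?
	        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
	            return hosts
	        }
	        // Mirrors the sed branch: rewrite an existing 127.0.1.1 entry...
	        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	        if loopback.MatchString(hosts) {
	            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	        }
	        // ...or, mirroring the `tee -a` branch, append one.
	        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	    }

	    func main() {
	        fmt.Print(pinHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "embed-certs-437083"))
	    }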
	I0927 18:37:12.993464  519752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:37:12.993547  519752 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19712-294006/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-294006/.minikube}
	I0927 18:37:12.993581  519752 ubuntu.go:177] setting up certificates
	I0927 18:37:12.993590  519752 provision.go:84] configureAuth start
	I0927 18:37:12.993652  519752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-437083
	I0927 18:37:13.010448  519752 provision.go:143] copyHostCerts
	I0927 18:37:13.010518  519752 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem, removing ...
	I0927 18:37:13.010533  519752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem
	I0927 18:37:13.010615  519752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/ca.pem (1078 bytes)
	I0927 18:37:13.010711  519752 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem, removing ...
	I0927 18:37:13.010722  519752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem
	I0927 18:37:13.010750  519752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/cert.pem (1123 bytes)
	I0927 18:37:13.010807  519752 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem, removing ...
	I0927 18:37:13.010818  519752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem
	I0927 18:37:13.010846  519752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-294006/.minikube/key.pem (1675 bytes)
	I0927 18:37:13.010901  519752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem org=jenkins.embed-certs-437083 san=[127.0.0.1 192.168.85.2 embed-certs-437083 localhost minikube]
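	configureAuth above refreshes the host-side CA/client certs and then mints a server certificate whose SANs cover every name and address the machine will be reached by (127.0.0.1, 192.168.85.2, the hostname, localhost, minikube). A sketch of SAN-bearing server-cert generation with the standard library (self-signed here purely for illustration; the real one is signed by the minikube CA):

	    // Sketch only: mint a server certificate carrying the SANs from the log.
	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-437083"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(10, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"embed-certs-437083", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }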
	I0927 18:37:13.493111  519752 provision.go:177] copyRemoteCerts
	I0927 18:37:13.493182  519752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:37:13.493233  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:13.511162  519752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa Username:docker}
	I0927 18:37:13.610647  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 18:37:13.636448  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0927 18:37:13.662169  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 18:37:13.687946  519752 provision.go:87] duration metric: took 694.340185ms to configureAuth
	I0927 18:37:13.688029  519752 ubuntu.go:193] setting minikube options for container-runtime
	I0927 18:37:13.688297  519752 config.go:182] Loaded profile config "embed-certs-437083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 18:37:13.688339  519752 machine.go:96] duration metric: took 4.178953045s to provisionDockerMachine
	I0927 18:37:13.688360  519752 client.go:171] duration metric: took 12.357925416s to LocalClient.Create
	I0927 18:37:13.688415  519752 start.go:167] duration metric: took 12.358029141s to libmachine.API.Create "embed-certs-437083"
	I0927 18:37:13.688445  519752 start.go:293] postStartSetup for "embed-certs-437083" (driver="docker")
	I0927 18:37:13.688472  519752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:37:13.688571  519752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:37:13.688648  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:13.705726  519752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa Username:docker}
	I0927 18:37:13.799108  519752 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:37:13.802655  519752 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 18:37:13.802697  519752 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 18:37:13.802708  519752 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 18:37:13.802721  519752 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 18:37:13.802733  519752 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/addons for local assets ...
	I0927 18:37:13.802801  519752 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-294006/.minikube/files for local assets ...
	I0927 18:37:13.802884  519752 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem -> 2993952.pem in /etc/ssl/certs
	I0927 18:37:13.802997  519752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:37:13.812048  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem --> /etc/ssl/certs/2993952.pem (1708 bytes)
	I0927 18:37:13.837907  519752 start.go:296] duration metric: took 149.430859ms for postStartSetup
	I0927 18:37:13.838278  519752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-437083
	I0927 18:37:13.854736  519752 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/config.json ...
	I0927 18:37:13.855092  519752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:37:13.855147  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:13.871829  519752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa Username:docker}
	I0927 18:37:13.966088  519752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 18:37:13.970609  519752 start.go:128] duration metric: took 12.644907209s to createHost
	I0927 18:37:13.970636  519752 start.go:83] releasing machines lock for "embed-certs-437083", held for 12.645054328s
	I0927 18:37:13.970720  519752 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-437083
	I0927 18:37:13.987313  519752 ssh_runner.go:195] Run: cat /version.json
	I0927 18:37:13.987313  519752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:37:13.987417  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:13.987427  519752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-437083
	I0927 18:37:14.004989  519752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa Username:docker}
	I0927 18:37:14.006328  519752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/embed-certs-437083/id_rsa Username:docker}
	I0927 18:37:14.092970  519752 ssh_runner.go:195] Run: systemctl --version
	I0927 18:37:14.231887  519752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 18:37:14.236983  519752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0927 18:37:14.268441  519752 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
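(The find/sed pipeline above inserts a "name" key into any loopback CNI config that lacks one and pins cniVersion to 1.0.0. Assuming a typical one-line loopback config, the patched result looks like the following; the file name is illustrative:

    cat /etc/cni/net.d/200-loopback.conf        # file name illustrative
    # { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }
)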
	I0927 18:37:14.268524  519752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:37:14.302298  519752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 18:37:14.302369  519752 start.go:495] detecting cgroup driver to use...
	I0927 18:37:14.302417  519752 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 18:37:14.302481  519752 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0927 18:37:14.316358  519752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0927 18:37:14.328250  519752 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:37:14.328342  519752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:37:14.342728  519752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:37:14.359534  519752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:37:14.456171  519752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:37:14.553476  519752 docker.go:233] disabling docker service ...
	I0927 18:37:14.553602  519752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:37:14.576326  519752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:37:14.590019  519752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:37:14.689088  519752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:37:14.790753  519752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
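(The docker shutdown above is the usual stop/disable/mask sequence ending with an is-active probe; condensed into plain systemctl calls:

    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service          # masking keeps other units from starting it again
    sudo systemctl is-active --quiet docker || echo "docker stopped"
)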
	I0927 18:37:14.803776  519752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:37:14.824216  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 18:37:14.835733  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 18:37:14.847546  519752 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 18:37:14.847662  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 18:37:14.859791  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 18:37:14.870993  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 18:37:14.881740  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 18:37:14.892507  519752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:37:14.902392  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 18:37:14.913128  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 18:37:14.923754  519752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
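(Taken together, the sed edits above rewrite a handful of config.toml keys in place rather than regenerating the file. One way to confirm the result; the expected values, per the commands above, are shown as comments:

    grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false                    # cgroupfs driver, per the detection above
    #   conf_dir = "/etc/cni/net.d"
)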
	I0927 18:37:14.934596  519752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:37:14.943770  519752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:37:14.952552  519752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:37:15.054497  519752 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0927 18:37:15.210704  519752 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0927 18:37:15.210828  519752 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0927 18:37:15.216972  519752 start.go:563] Will wait 60s for crictl version
	I0927 18:37:15.217093  519752 ssh_runner.go:195] Run: which crictl
	I0927 18:37:15.221728  519752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:37:15.260441  519752 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0927 18:37:15.260568  519752 ssh_runner.go:195] Run: containerd --version
	I0927 18:37:15.282462  519752 ssh_runner.go:195] Run: containerd --version
	I0927 18:37:15.311179  519752 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0927 18:37:15.312816  519752 cli_runner.go:164] Run: docker network inspect embed-certs-437083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 18:37:15.331538  519752 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0927 18:37:15.335303  519752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
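(Note the grep -v/append/cp idiom above instead of sed -i: /etc/hosts inside a container is typically bind-mounted, so the file has to be overwritten through the same inode rather than replaced. Generically:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                # cp writes through the bind mount; mv would swap the inode
)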
	I0927 18:37:15.346540  519752 kubeadm.go:883] updating cluster {Name:embed-certs-437083 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-437083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:37:15.346668  519752 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 18:37:15.346735  519752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:37:15.399448  519752 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 18:37:15.399478  519752 containerd.go:534] Images already preloaded, skipping extraction
	I0927 18:37:15.399551  519752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:37:15.439990  519752 containerd.go:627] all images are preloaded for containerd runtime.
	I0927 18:37:15.440016  519752 cache_images.go:84] Images are preloaded, skipping loading
	I0927 18:37:15.440025  519752 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0927 18:37:15.440131  519752 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-437083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-437083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:37:15.440202  519752 ssh_runner.go:195] Run: sudo crictl info
	I0927 18:37:15.477620  519752 cni.go:84] Creating CNI manager for ""
	I0927 18:37:15.477645  519752 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 18:37:15.477656  519752 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:37:15.477681  519752 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-437083 NodeName:embed-certs-437083 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:37:15.477810  519752 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-437083"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:37:15.477893  519752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:37:15.487335  519752 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:37:15.487434  519752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:37:15.496322  519752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0927 18:37:15.515518  519752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:37:15.534296  519752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
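(The rendered kubeadm config is staged as kubeadm.yaml.new before the init further below. A sketch of sanity-checking it first, assuming kubeadm's --dry-run flag, which is present in recent releases including v1.31:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
)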
	I0927 18:37:15.553752  519752 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0927 18:37:15.557889  519752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:37:15.568849  519752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:37:15.651722  519752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:37:15.668504  519752 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083 for IP: 192.168.85.2
	I0927 18:37:15.668528  519752 certs.go:194] generating shared ca certs ...
	I0927 18:37:15.668545  519752 certs.go:226] acquiring lock for ca certs: {Name:mk0891ce7588143d48f2c5fb538d185b80c1ae26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:15.668685  519752 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key
	I0927 18:37:15.668736  519752 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key
	I0927 18:37:15.668747  519752 certs.go:256] generating profile certs ...
	I0927 18:37:15.668802  519752 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.key
	I0927 18:37:15.668820  519752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.crt with IP's: []
	I0927 18:37:16.389199  519752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.crt ...
	I0927 18:37:16.389236  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.crt: {Name:mkc4bcb2657a9564309cd188edb03227cb688f68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:16.390214  519752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.key ...
	I0927 18:37:16.390234  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/client.key: {Name:mk12eda4ebfb0ab777688a08f280d4abf69b5af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:16.390984  519752 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key.54fac8f5
	I0927 18:37:16.391019  519752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt.54fac8f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0927 18:37:17.008571  519752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt.54fac8f5 ...
	I0927 18:37:17.008609  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt.54fac8f5: {Name:mk5661ba1bcd47dd89a5523b557b54b93728e3ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:17.008834  519752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key.54fac8f5 ...
	I0927 18:37:17.008852  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key.54fac8f5: {Name:mkb70e6f01fe9a0e2d52e9a8478c109f0013015f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:17.008975  519752 certs.go:381] copying /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt.54fac8f5 -> /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt
	I0927 18:37:17.009065  519752 certs.go:385] copying /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key.54fac8f5 -> /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key
	I0927 18:37:17.009126  519752 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.key
	I0927 18:37:17.009145  519752 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.crt with IP's: []
	I0927 18:37:17.561952  519752 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.crt ...
	I0927 18:37:17.561985  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.crt: {Name:mk6dd7917efc97d7a93c4cd547c7b239a90aab69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:17.562903  519752 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.key ...
	I0927 18:37:17.562921  519752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.key: {Name:mk46b33ffa5df04dbc200e79463845f1cd57c53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:17.563738  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395.pem (1338 bytes)
	W0927 18:37:17.563789  519752 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395_empty.pem, impossibly tiny 0 bytes
	I0927 18:37:17.563805  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:37:17.563834  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/ca.pem (1078 bytes)
	I0927 18:37:17.563863  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:37:17.563892  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/certs/key.pem (1675 bytes)
	I0927 18:37:17.563938  519752 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem (1708 bytes)
	I0927 18:37:17.564552  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:37:17.591400  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 18:37:17.617885  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:37:17.642849  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0927 18:37:17.667223  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 18:37:17.692170  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 18:37:17.717130  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:37:17.743116  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/embed-certs-437083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 18:37:17.769081  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:37:17.796316  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/certs/299395.pem --> /usr/share/ca-certificates/299395.pem (1338 bytes)
	I0927 18:37:17.822452  519752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/ssl/certs/2993952.pem --> /usr/share/ca-certificates/2993952.pem (1708 bytes)
	I0927 18:37:17.846661  519752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:37:17.865998  519752 ssh_runner.go:195] Run: openssl version
	I0927 18:37:17.871556  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:37:17.881661  519752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:17.885511  519752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 17:40 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:17.885582  519752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:17.892884  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:37:17.902920  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/299395.pem && ln -fs /usr/share/ca-certificates/299395.pem /etc/ssl/certs/299395.pem"
	I0927 18:37:17.912538  519752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299395.pem
	I0927 18:37:17.916220  519752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:50 /usr/share/ca-certificates/299395.pem
	I0927 18:37:17.916287  519752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299395.pem
	I0927 18:37:17.923452  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/299395.pem /etc/ssl/certs/51391683.0"
	I0927 18:37:17.933012  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2993952.pem && ln -fs /usr/share/ca-certificates/2993952.pem /etc/ssl/certs/2993952.pem"
	I0927 18:37:17.942295  519752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2993952.pem
	I0927 18:37:17.945934  519752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:50 /usr/share/ca-certificates/2993952.pem
	I0927 18:37:17.946014  519752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2993952.pem
	I0927 18:37:17.953010  519752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2993952.pem /etc/ssl/certs/3ec20f2e.0"
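(Each test -L/ln -fs pair above rebuilds OpenSSL's hashed-directory layout by hand; the link names (b5213941.0, 51391683.0, 3ec20f2e.0) are exactly what openssl x509 -hash prints for each cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
)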
	I0927 18:37:17.963222  519752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:37:17.966733  519752 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 18:37:17.966789  519752 kubeadm.go:392] StartCluster: {Name:embed-certs-437083 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-437083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:17.966865  519752 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0927 18:37:17.966924  519752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:37:18.004350  519752 cri.go:89] found id: ""
	I0927 18:37:18.004424  519752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:37:18.016206  519752 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 18:37:18.033954  519752 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 18:37:18.034046  519752 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:37:18.046578  519752 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:37:18.046606  519752 kubeadm.go:157] found existing configuration files:
	
	I0927 18:37:18.046671  519752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:37:18.058839  519752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:37:18.058943  519752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:37:18.069091  519752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:37:18.079858  519752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:37:18.080007  519752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:37:18.090733  519752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:37:18.101480  519752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:37:18.101644  519752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:37:18.112712  519752 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:37:18.122746  519752 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:37:18.122828  519752 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 18:37:18.141851  519752 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 18:37:18.189619  519752 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 18:37:18.189993  519752 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 18:37:18.210828  519752 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 18:37:18.210914  519752 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 18:37:18.210958  519752 kubeadm.go:310] OS: Linux
	I0927 18:37:18.211012  519752 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 18:37:18.211067  519752 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 18:37:18.211122  519752 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 18:37:18.211177  519752 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 18:37:18.211233  519752 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 18:37:18.211293  519752 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 18:37:18.211349  519752 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 18:37:18.211405  519752 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 18:37:18.211458  519752 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 18:37:18.278048  519752 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 18:37:18.278199  519752 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 18:37:18.278306  519752 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 18:37:18.284387  519752 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 18:37:18.288563  519752 out.go:235]   - Generating certificates and keys ...
	I0927 18:37:18.288682  519752 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 18:37:18.288768  519752 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 18:37:19.347169  519752 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 18:37:20.041066  519752 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 18:37:20.615368  519752 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 18:37:22.169134  509591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:37:22.183163  509591 api_server.go:72] duration metric: took 5m56.052597501s to wait for apiserver process to appear ...
	I0927 18:37:22.183185  509591 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:37:22.183221  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0927 18:37:22.183283  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 18:37:22.243147  509591 cri.go:89] found id: "85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:22.243171  509591 cri.go:89] found id: "4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:22.243177  509591 cri.go:89] found id: ""
	I0927 18:37:22.243196  509591 logs.go:276] 2 containers: [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631]
	I0927 18:37:22.243254  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.248620  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.253646  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0927 18:37:22.253731  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 18:37:22.320532  509591 cri.go:89] found id: "e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:22.320616  509591 cri.go:89] found id: "67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:22.320642  509591 cri.go:89] found id: ""
	I0927 18:37:22.320681  509591 logs.go:276] 2 containers: [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a]
	I0927 18:37:22.320769  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.325468  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.330520  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0927 18:37:22.330601  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 18:37:22.385427  509591 cri.go:89] found id: "ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:22.385456  509591 cri.go:89] found id: "3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:22.385466  509591 cri.go:89] found id: ""
	I0927 18:37:22.385474  509591 logs.go:276] 2 containers: [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c]
	I0927 18:37:22.385533  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.390761  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.395352  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0927 18:37:22.395507  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 18:37:22.450433  509591 cri.go:89] found id: "51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:22.450459  509591 cri.go:89] found id: "9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
	I0927 18:37:22.450464  509591 cri.go:89] found id: ""
	I0927 18:37:22.450505  509591 logs.go:276] 2 containers: [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289]
	I0927 18:37:22.450589  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.455198  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.459233  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0927 18:37:22.459360  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 18:37:22.548138  509591 cri.go:89] found id: "fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:22.548177  509591 cri.go:89] found id: "4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:22.548183  509591 cri.go:89] found id: ""
	I0927 18:37:22.548191  509591 logs.go:276] 2 containers: [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd]
	I0927 18:37:22.548280  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.552629  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.557112  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 18:37:22.557228  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 18:37:22.608500  509591 cri.go:89] found id: "a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:22.608525  509591 cri.go:89] found id: "0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:22.608541  509591 cri.go:89] found id: ""
	I0927 18:37:22.608619  509591 logs.go:276] 2 containers: [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055]
	I0927 18:37:22.608701  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.613019  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.618228  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0927 18:37:22.618331  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 18:37:22.667157  509591 cri.go:89] found id: "6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:22.667180  509591 cri.go:89] found id: "0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:22.667197  509591 cri.go:89] found id: ""
	I0927 18:37:22.667219  509591 logs.go:276] 2 containers: [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d]
	I0927 18:37:22.667300  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.671396  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.675112  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 18:37:22.675237  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 18:37:22.724970  509591 cri.go:89] found id: "b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:22.724996  509591 cri.go:89] found id: ""
	I0927 18:37:22.725005  509591 logs.go:276] 1 containers: [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a]
	I0927 18:37:22.725087  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.729152  509591 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0927 18:37:22.729284  509591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 18:37:22.778371  509591 cri.go:89] found id: "e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:22.778404  509591 cri.go:89] found id: "86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:22.778410  509591 cri.go:89] found id: ""
	I0927 18:37:22.778417  509591 logs.go:276] 2 containers: [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef]
	I0927 18:37:22.778522  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.782527  509591 ssh_runner.go:195] Run: which crictl
	I0927 18:37:22.786576  509591 logs.go:123] Gathering logs for kube-apiserver [4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631] ...
	I0927 18:37:22.786629  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631"
	I0927 18:37:22.860905  509591 logs.go:123] Gathering logs for etcd [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab] ...
	I0927 18:37:22.860942  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab"
	I0927 18:37:22.920203  509591 logs.go:123] Gathering logs for etcd [67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a] ...
	I0927 18:37:22.920235  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a"
	I0927 18:37:22.990981  509591 logs.go:123] Gathering logs for kube-scheduler [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546] ...
	I0927 18:37:22.991028  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546"
	I0927 18:37:23.049744  509591 logs.go:123] Gathering logs for storage-provisioner [86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef] ...
	I0927 18:37:23.049775  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef"
	I0927 18:37:23.102110  509591 logs.go:123] Gathering logs for describe nodes ...
	I0927 18:37:23.102140  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 18:37:23.305600  509591 logs.go:123] Gathering logs for coredns [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87] ...
	I0927 18:37:23.305636  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87"
	I0927 18:37:23.360521  509591 logs.go:123] Gathering logs for kube-scheduler [9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289] ...
	I0927 18:37:23.360556  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289"
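(Every gathering step in this stretch is the same two-command pattern: resolve container IDs by name, then tail each one. Condensed, with the name filter and tail length taken from the log:

    for id in $(sudo crictl ps -a --quiet --name=kube-scheduler); do
      sudo /usr/bin/crictl logs --tail 400 "$id"
    done
)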
	I0927 18:37:21.482531  519752 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 18:37:21.944614  519752 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 18:37:21.944943  519752 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-437083 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0927 18:37:22.326054  519752 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 18:37:22.326548  519752 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-437083 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0927 18:37:23.154392  519752 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 18:37:23.525226  519752 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 18:37:23.885175  519752 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 18:37:23.885609  519752 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 18:37:24.613136  519752 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 18:37:24.788636  519752 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 18:37:24.950153  519752 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 18:37:25.421789  519752 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 18:37:25.807450  519752 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 18:37:25.808430  519752 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 18:37:25.811772  519752 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 18:37:25.815301  519752 out.go:235]   - Booting up control plane ...
	I0927 18:37:25.815432  519752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 18:37:25.815523  519752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 18:37:25.815999  519752 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 18:37:25.837049  519752 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 18:37:25.846712  519752 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 18:37:25.846769  519752 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 18:37:25.961698  519752 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 18:37:25.961818  519752 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
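(The kubelet-check above polls the kubelet's local liveness endpoint; the same probe can be run by hand from inside the node:

    curl -sf http://127.0.0.1:10248/healthz && echo ok
)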
	I0927 18:37:23.421593  509591 logs.go:123] Gathering logs for kube-proxy [4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd] ...
	I0927 18:37:23.421635  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd"
	I0927 18:37:23.479620  509591 logs.go:123] Gathering logs for kube-controller-manager [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b] ...
	I0927 18:37:23.479649  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b"
	I0927 18:37:23.555578  509591 logs.go:123] Gathering logs for storage-provisioner [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0] ...
	I0927 18:37:23.555616  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0"
	I0927 18:37:23.604573  509591 logs.go:123] Gathering logs for containerd ...
	I0927 18:37:23.604603  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0927 18:37:23.672862  509591 logs.go:123] Gathering logs for kubelet ...
	I0927 18:37:23.672900  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 18:37:23.727037  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:42 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.963284     657 reflector.go:138] object-"default"/"default-token-ncsvq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ncsvq" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727304  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980574     657 reflector.go:138] object-"kube-system"/"kindnet-token-4c6k8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-4c6k8" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727547  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.980841     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-msgkf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-msgkf" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.727797  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:42.981364     657 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hdwrd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hdwrd" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.730468  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043122     657 reflector.go:138] object-"kube-system"/"metrics-server-token-m6vt7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-m6vt7" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.730716  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.043201     657 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.732343  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.071223     657 reflector.go:138] object-"kube-system"/"coredns-token-nlnv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-nlnv6" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.732566  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:43 old-k8s-version-313926 kubelet[657]: E0927 18:31:43.008976     657 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-313926" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-313926' and this object
	W0927 18:37:23.741449  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:45 old-k8s-version-313926 kubelet[657]: E0927 18:31:45.917706     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.741694  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:46 old-k8s-version-313926 kubelet[657]: E0927 18:31:46.301139     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.744588  509591 logs.go:138] Found kubelet problem: Sep 27 18:31:59 old-k8s-version-313926 kubelet[657]: E0927 18:31:59.162434     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.746591  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:06 old-k8s-version-313926 kubelet[657]: E0927 18:32:06.395938     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747093  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:07 old-k8s-version-313926 kubelet[657]: E0927 18:32:07.401770     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747485  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:09 old-k8s-version-313926 kubelet[657]: E0927 18:32:09.667132     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.747699  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:13 old-k8s-version-313926 kubelet[657]: E0927 18:32:13.135325     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.748487  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:16 old-k8s-version-313926 kubelet[657]: E0927 18:32:16.435963     657 pod_workers.go:191] Error syncing pod 8d21b02b-38af-4ef7-a435-3f3de26186e1 ("storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8d21b02b-38af-4ef7-a435-3f3de26186e1)"
	W0927 18:37:23.749437  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:24 old-k8s-version-313926 kubelet[657]: E0927 18:32:24.532611     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.752028  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:27 old-k8s-version-313926 kubelet[657]: E0927 18:32:27.146345     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.752431  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:29 old-k8s-version-313926 kubelet[657]: E0927 18:32:29.667510     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.752776  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:40 old-k8s-version-313926 kubelet[657]: E0927 18:32:40.135454     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.753407  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:44 old-k8s-version-313926 kubelet[657]: E0927 18:32:44.605072     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.753759  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:49 old-k8s-version-313926 kubelet[657]: E0927 18:32:49.667087     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.753968  509591 logs.go:138] Found kubelet problem: Sep 27 18:32:55 old-k8s-version-313926 kubelet[657]: E0927 18:32:55.138594     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.754321  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:04 old-k8s-version-313926 kubelet[657]: E0927 18:33:04.135348     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.754529  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:06 old-k8s-version-313926 kubelet[657]: E0927 18:33:06.135559     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.754878  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:15 old-k8s-version-313926 kubelet[657]: E0927 18:33:15.137034     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.757356  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:18 old-k8s-version-313926 kubelet[657]: E0927 18:33:18.150090     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.758049  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:29 old-k8s-version-313926 kubelet[657]: E0927 18:33:29.728038     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.758334  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:33 old-k8s-version-313926 kubelet[657]: E0927 18:33:33.135093     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.758692  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:39 old-k8s-version-313926 kubelet[657]: E0927 18:33:39.667115     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.758905  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:46 old-k8s-version-313926 kubelet[657]: E0927 18:33:46.139080     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.759328  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:54 old-k8s-version-313926 kubelet[657]: E0927 18:33:54.135098     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.759538  509591 logs.go:138] Found kubelet problem: Sep 27 18:33:57 old-k8s-version-313926 kubelet[657]: E0927 18:33:57.135120     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.759746  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:08 old-k8s-version-313926 kubelet[657]: E0927 18:34:08.136761     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.760112  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:09 old-k8s-version-313926 kubelet[657]: E0927 18:34:09.134623     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.760470  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:20 old-k8s-version-313926 kubelet[657]: E0927 18:34:20.139455     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.760674  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:22 old-k8s-version-313926 kubelet[657]: E0927 18:34:22.138347     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.761022  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:32 old-k8s-version-313926 kubelet[657]: E0927 18:34:32.137996     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.761244  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:34 old-k8s-version-313926 kubelet[657]: E0927 18:34:34.136489     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.761601  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:45 old-k8s-version-313926 kubelet[657]: E0927 18:34:45.146030     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.764181  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:46 old-k8s-version-313926 kubelet[657]: E0927 18:34:46.145411     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0927 18:37:23.764797  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:56 old-k8s-version-313926 kubelet[657]: E0927 18:34:56.970275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.765007  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:57 old-k8s-version-313926 kubelet[657]: E0927 18:34:57.135184     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.765368  509591 logs.go:138] Found kubelet problem: Sep 27 18:34:59 old-k8s-version-313926 kubelet[657]: E0927 18:34:59.667001     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.765575  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:11 old-k8s-version-313926 kubelet[657]: E0927 18:35:11.135708     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.765923  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:12 old-k8s-version-313926 kubelet[657]: E0927 18:35:12.134696     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.766131  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:24 old-k8s-version-313926 kubelet[657]: E0927 18:35:24.137383     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.766482  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:25 old-k8s-version-313926 kubelet[657]: E0927 18:35:25.134891     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.766693  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:35 old-k8s-version-313926 kubelet[657]: E0927 18:35:35.135177     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.767040  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:39 old-k8s-version-313926 kubelet[657]: E0927 18:35:39.135338     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.767389  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.136131     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.767595  509591 logs.go:138] Found kubelet problem: Sep 27 18:35:50 old-k8s-version-313926 kubelet[657]: E0927 18:35:50.135330     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.767803  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:01 old-k8s-version-313926 kubelet[657]: E0927 18:36:01.135157     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.768152  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:03 old-k8s-version-313926 kubelet[657]: E0927 18:36:03.135275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.768369  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:15 old-k8s-version-313926 kubelet[657]: E0927 18:36:15.145862     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.768745  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: E0927 18:36:17.134586     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.768953  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:26 old-k8s-version-313926 kubelet[657]: E0927 18:36:26.134898     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.769350  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: E0927 18:36:31.135626     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.769557  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.769926  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.770150  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.770548  509591 logs.go:138] Found kubelet problem: Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.770740  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:23.771067  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: E0927 18:37:12.135076     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:23.771252  509591 logs.go:138] Found kubelet problem: Sep 27 18:37:17 old-k8s-version-313926 kubelet[657]: E0927 18:37:17.138317     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:23.771262  509591 logs.go:123] Gathering logs for coredns [3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c] ...
	I0927 18:37:23.771279  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c"
	I0927 18:37:23.819339  509591 logs.go:123] Gathering logs for kube-controller-manager [0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055] ...
	I0927 18:37:23.819370  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055"
	I0927 18:37:23.873786  509591 logs.go:123] Gathering logs for kindnet [0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d] ...
	I0927 18:37:23.873824  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d"
	I0927 18:37:23.923360  509591 logs.go:123] Gathering logs for dmesg ...
	I0927 18:37:23.923393  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 18:37:23.939919  509591 logs.go:123] Gathering logs for kube-apiserver [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151] ...
	I0927 18:37:23.939949  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151"
	I0927 18:37:24.005877  509591 logs.go:123] Gathering logs for kube-proxy [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67] ...
	I0927 18:37:24.005914  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67"
	I0927 18:37:24.067764  509591 logs.go:123] Gathering logs for kindnet [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268] ...
	I0927 18:37:24.067873  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268"
	I0927 18:37:24.163533  509591 logs.go:123] Gathering logs for kubernetes-dashboard [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a] ...
	I0927 18:37:24.163611  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a"
	I0927 18:37:24.237049  509591 logs.go:123] Gathering logs for container status ...
	I0927 18:37:24.237130  509591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 18:37:24.338852  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:24.338929  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 18:37:24.339073  509591 out.go:270] X Problems detected in kubelet:
	W0927 18:37:24.339236  509591 out.go:270]   Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:24.339280  509591 out.go:270]   Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:24.339390  509591 out.go:270]   Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 18:37:24.339426  509591 out.go:270]   Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: E0927 18:37:12.135076     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	W0927 18:37:24.339488  509591 out.go:270]   Sep 27 18:37:17 old-k8s-version-313926 kubelet[657]: E0927 18:37:17.138317     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0927 18:37:24.339524  509591 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:24.339560  509591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:27.456512  519752 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502088966s
	I0927 18:37:27.456608  519752 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 18:37:33.458795  519752 kubeadm.go:310] [api-check] The API server is healthy after 6.002351771s
	I0927 18:37:33.480809  519752 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 18:37:33.495508  519752 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 18:37:33.520778  519752 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 18:37:33.520986  519752 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-437083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 18:37:33.532910  519752 kubeadm.go:310] [bootstrap-token] Using token: 28etal.cp32b6buv2k5vpki
	I0927 18:37:34.340625  509591 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0927 18:37:34.350621  509591 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0927 18:37:34.352786  509591 out.go:201] 
	W0927 18:37:34.354551  509591 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0927 18:37:34.354706  509591 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0927 18:37:34.354736  509591 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0927 18:37:34.354747  509591 out.go:270] * 
	W0927 18:37:34.356466  509591 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 18:37:34.358492  509591 out.go:201] 
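The X Exiting block above ends the old-k8s-version SecondStart attempt: the apiserver answers /healthz with 200, but minikube's wait loop never sees the control plane report v1.20.0, so it exits with K8S_UNHEALTHY_CONTROL_PLANE. A minimal recovery sketch along the lines of minikube's own suggestion (the profile name is from this run; the start flags are illustrative assumptions, not taken from the log):

  minikube delete --all --purge
  minikube start -p old-k8s-version-313926 --driver=docker \
    --container-runtime=containerd --kubernetes-version=v1.20.0
  minikube logs --file=logs.txt -p old-k8s-version-313926   # attach to a new GitHub issue if it recurs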
	I0927 18:37:33.534912  519752 out.go:235]   - Configuring RBAC rules ...
	I0927 18:37:33.535041  519752 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 18:37:33.542560  519752 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 18:37:33.552114  519752 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 18:37:33.557134  519752 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 18:37:33.562445  519752 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 18:37:33.566713  519752 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 18:37:33.868641  519752 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 18:37:34.295886  519752 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 18:37:34.868798  519752 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 18:37:34.870040  519752 kubeadm.go:310] 
	I0927 18:37:34.870118  519752 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 18:37:34.870128  519752 kubeadm.go:310] 
	I0927 18:37:34.870205  519752 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 18:37:34.870215  519752 kubeadm.go:310] 
	I0927 18:37:34.870240  519752 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 18:37:34.870303  519752 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 18:37:34.870357  519752 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 18:37:34.870366  519752 kubeadm.go:310] 
	I0927 18:37:34.870429  519752 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 18:37:34.870440  519752 kubeadm.go:310] 
	I0927 18:37:34.870488  519752 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 18:37:34.870497  519752 kubeadm.go:310] 
	I0927 18:37:34.870549  519752 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 18:37:34.870627  519752 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 18:37:34.870698  519752 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 18:37:34.870706  519752 kubeadm.go:310] 
	I0927 18:37:34.870790  519752 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 18:37:34.870870  519752 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 18:37:34.870879  519752 kubeadm.go:310] 
	I0927 18:37:34.870962  519752 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 28etal.cp32b6buv2k5vpki \
	I0927 18:37:34.871069  519752 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e36d06a61fed1cb797521692277c6fed05d87d948beae49341a57851e31b2de5 \
	I0927 18:37:34.871093  519752 kubeadm.go:310] 	--control-plane 
	I0927 18:37:34.871107  519752 kubeadm.go:310] 
	I0927 18:37:34.871195  519752 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 18:37:34.871204  519752 kubeadm.go:310] 
	I0927 18:37:34.871285  519752 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 28etal.cp32b6buv2k5vpki \
	I0927 18:37:34.871389  519752 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e36d06a61fed1cb797521692277c6fed05d87d948beae49341a57851e31b2de5 
	I0927 18:37:34.875571  519752 kubeadm.go:310] W0927 18:37:18.186614    1059 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 18:37:34.875927  519752 kubeadm.go:310] W0927 18:37:18.187341    1059 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 18:37:34.876147  519752 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 18:37:34.876276  519752 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 18:37:34.876304  519752 cni.go:84] Creating CNI manager for ""
	I0927 18:37:34.876313  519752 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 18:37:34.880132  519752 out.go:177] * Configuring CNI (Container Networking Interface) ...
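Everything the 509591 process prints between its logs.go:123 lines is collected over SSH with the exact commands shown in the ssh_runner entries. A hedged sketch of repeating that collection by hand on the node, should the automated run be unavailable (container IDs must come from your own `crictl ps -a`):

  minikube ssh -p old-k8s-version-313926
  sudo crictl ps -a                              # list containers, running and exited
  sudo crictl logs --tail 400 <container-id>     # per-component logs, as logs.go gathers them
  sudo journalctl -u kubelet -n 400              # the stream scanned for "kubelet problems"
  sudo journalctl -u containerd -n 400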
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	0c8756f9ba318       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   3fe4788bf7bd1       dashboard-metrics-scraper-8d5bb5db8-gm9vd
	e99ef95a69b41       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   0ea7901fe75fb       storage-provisioner
	b502a59de9a32       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   c2d34c7144919       kubernetes-dashboard-cd95d586-jmh4f
	ecb027eadfac4       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   06a972142f331       coredns-74ff55c5b-btnnf
	86ac708203303       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   0ea7901fe75fb       storage-provisioner
	fd7c8b16aaa16       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   10b17d06c4ed9       kube-proxy-gccpt
	838a50fcaddcd       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   da8791e5d47ed       busybox
	6ad5208920899       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   4a8ab2e343217       kindnet-7l2lp
	a007a778324de       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   1cbee7cd0b9d0       kube-controller-manager-old-k8s-version-313926
	e71e696120b97       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   74dfa365c1634       etcd-old-k8s-version-313926
	85ecd46044408       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   983c6668a1430       kube-apiserver-old-k8s-version-313926
	51d0717d23f66       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   6636feb8d1b9b       kube-scheduler-old-k8s-version-313926
	9c8d10735b9d0       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   3cacd9c2df7dc       busybox
	3fd3702ef77c3       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   396da40294326       coredns-74ff55c5b-btnnf
	0b88fa57b5f72       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   a2c4dfaac6f06       kindnet-7l2lp
	4671e4d059054       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   65d9ecd16f3ca       kube-proxy-gccpt
	9d696b48541d8       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   78da7a43e7819       kube-scheduler-old-k8s-version-313926
	0ca3bb4254dee       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   b7c913ca548f5       kube-controller-manager-old-k8s-version-313926
	67828cb17957b       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   215fb61b5e647       etcd-old-k8s-version-313926
	4ae2386761a34       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   3493716649509       kube-apiserver-old-k8s-version-313926
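In the table above, dashboard-metrics-scraper sits in Exited at attempt 5, matching the escalating CrashLoopBackOff back-offs (10s up to 2m40s) in the kubelet log, while metrics-server has no container at all because its image pull never succeeds. A short sketch for drilling into one exited container (the truncated ID is from the table; crictl accepts a unique ID prefix):

  sudo crictl logs 0c8756f9ba318       # output of the last dashboard-metrics-scraper run
  sudo crictl inspect 0c8756f9ba318    # JSON status; check exitCode and reason under .status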
	
	
	==> containerd <==
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.160843231Z" level=info msg="CreateContainer within sandbox \"3fe4788bf7bd1e98d65c8bf3167bd38bb203c05f5c04ee9481525e1f1c087214\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc\""
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.161386724Z" level=info msg="StartContainer for \"5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc\""
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.229569622Z" level=info msg="StartContainer for \"5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc\" returns successfully"
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.287820362Z" level=info msg="shim disconnected" id=5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc namespace=k8s.io
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.287878905Z" level=warning msg="cleaning up after shim disconnected" id=5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc namespace=k8s.io
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.287889515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.729616547Z" level=info msg="RemoveContainer for \"eeea7031734f68512e3e3a0a4f10827e0b7b25fb8fa0bf0c10d4f34e948fa3ba\""
	Sep 27 18:33:29 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:33:29.737231498Z" level=info msg="RemoveContainer for \"eeea7031734f68512e3e3a0a4f10827e0b7b25fb8fa0bf0c10d4f34e948fa3ba\" returns successfully"
	Sep 27 18:34:46 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:46.136973939Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:34:46 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:46.143261884Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 27 18:34:46 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:46.144866077Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 27 18:34:46 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:46.144961177Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.137773535Z" level=info msg="CreateContainer within sandbox \"3fe4788bf7bd1e98d65c8bf3167bd38bb203c05f5c04ee9481525e1f1c087214\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.153368298Z" level=info msg="CreateContainer within sandbox \"3fe4788bf7bd1e98d65c8bf3167bd38bb203c05f5c04ee9481525e1f1c087214\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66\""
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.154279657Z" level=info msg="StartContainer for \"0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66\""
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.218980110Z" level=info msg="StartContainer for \"0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66\" returns successfully"
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.249797375Z" level=info msg="shim disconnected" id=0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66 namespace=k8s.io
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.249865925Z" level=warning msg="cleaning up after shim disconnected" id=0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66 namespace=k8s.io
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.249878125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.971901976Z" level=info msg="RemoveContainer for \"5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc\""
	Sep 27 18:34:56 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:34:56.979050147Z" level=info msg="RemoveContainer for \"5944b299c7fec104631d351fab4229d97f1ae7f3381ed0a8040cb301b2dcb3cc\" returns successfully"
	Sep 27 18:37:30 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:37:30.141687717Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:37:30 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:37:30.159813723Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 27 18:37:30 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:37:30.161869620Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 27 18:37:30 old-k8s-version-313926 containerd[567]: time="2024-09-27T18:37:30.162330868Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
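
Note: the repeated PullImage failures above are the central symptom in this log. The kubelet section further down shows the metrics-server pod is configured with the image fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain is not a resolvable host, so every pull attempt fails at DNS resolution (lookup fake.domain on 192.168.76.1:53). As an illustrative follow-up (not part of the captured output, and assuming nslookup is present in the node image), the DNS behaviour can be confirmed from inside the node:

    out/minikube-linux-arm64 -p old-k8s-version-313926 ssh -- nslookup fake.domain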
	
	
	==> coredns [3fd3702ef77c385b3d2e6dfd2ac0f53a5a7799f903dae469545853c2c86e071c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:48762 - 35827 "HINFO IN 731776961551148248.829989729746408064. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.044802854s
	
	
	==> coredns [ecb027eadfac41b65c14465ec2e732bdb948868fa4dd15b58a5a3de52b61ea87] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:60043 - 40546 "HINFO IN 988597629537507514.117113749347180460. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012155417s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-313926
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-313926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=old-k8s-version-313926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_28_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:28:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-313926
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:37:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:32:43 +0000   Fri, 27 Sep 2024 18:28:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:32:43 +0000   Fri, 27 Sep 2024 18:28:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:32:43 +0000   Fri, 27 Sep 2024 18:28:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:32:43 +0000   Fri, 27 Sep 2024 18:29:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-313926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f80c8c8be88a4ce2938071fddc27942d
	  System UUID:                0c11e5a5-7c4a-4448-8263-793be94a25b9
	  Boot ID:                    7a34a0f0-976f-42af-914d-3a2d2373d850
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-btnnf                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m32s
	  kube-system                 etcd-old-k8s-version-313926                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m39s
	  kube-system                 kindnet-7l2lp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m32s
	  kube-system                 kube-apiserver-old-k8s-version-313926             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-controller-manager-old-k8s-version-313926    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-gccpt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-scheduler-old-k8s-version-313926             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 metrics-server-9975d5f86-cft95                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m30s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-gm9vd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-jmh4f               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s (x5 over 8m58s)  kubelet     Node old-k8s-version-313926 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s (x4 over 8m58s)  kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet     Node old-k8s-version-313926 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m32s                  kubelet     Node old-k8s-version-313926 status is now: NodeReady
	  Normal  Starting                 8m30s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m2s)    kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m2s)    kubelet     Node old-k8s-version-313926 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m2s)    kubelet     Node old-k8s-version-313926 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m2s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m51s                  kube-proxy  Starting kube-proxy.
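
Note: the Events table records two "Starting kubelet." entries (8m39s and 6m2s ago), i.e. the node was brought up twice in this run; everything after the second start, including the kube-proxy restart 5m51s ago, belongs to the restarted cluster. The same view can be regenerated with an illustrative command (not part of the captured output):

    kubectl --context old-k8s-version-313926 describe node old-k8s-version-313926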
	
	
	==> dmesg <==
	[Sep27 17:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [67828cb17957ba1f0aae18f3a50bea2a2743a3fe88accd1c7340095d4ba3431a] <==
	raft2024/09/27 18:28:39 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/27 18:28:39 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/27 18:28:39 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-27 18:28:39.488848 I | etcdserver: published {Name:old-k8s-version-313926 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-27 18:28:39.489010 I | embed: ready to serve client requests
	2024-09-27 18:28:39.490981 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-27 18:28:39.491173 I | embed: ready to serve client requests
	2024-09-27 18:28:39.496466 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-27 18:28:39.553905 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-27 18:28:39.557317 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-27 18:28:39.563120 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-27 18:29:02.012608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:02.494521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:12.494604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:22.494588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:32.494625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:42.494719 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:29:52.494689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:02.494705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:12.494716 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:22.494663 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:32.494623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:42.494869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:30:52.494698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:31:02.494901 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [e71e696120b9769d7abda2273f8c6fc5dade3817841e06e7c0c2ac2a2f7e52ab] <==
	2024-09-27 18:33:34.264457 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:33:44.264656 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:33:54.264666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:04.264547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:14.264604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:24.264539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:34.264545 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:44.264686 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:34:54.264589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:04.264435 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:14.264398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:24.264683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:34.264759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:44.264701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:35:54.264422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:04.264579 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:14.264518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:24.264567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:34.264551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:44.264772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:36:54.264610 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:37:04.264539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:37:14.264642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:37:24.268220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 18:37:34.264594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:37:36 up  2:20,  0 users,  load average: 1.97, 2.09, 2.42
	Linux old-k8s-version-313926 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [0b88fa57b5f7233f70e4decfcc9ad22dddd56645d98ec12b1a78653692fd426d] <==
	I0927 18:29:09.018208       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0927 18:29:09.018694       1 metrics.go:61] Registering metrics
	I0927 18:29:09.018839       1 controller.go:374] Syncing nftables rules
	I0927 18:29:18.842159       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:29:18.842223       1 main.go:299] handling current node
	I0927 18:29:28.842924       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:29:28.842958       1 main.go:299] handling current node
	I0927 18:29:38.846235       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:29:38.846270       1 main.go:299] handling current node
	I0927 18:29:48.851059       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:29:48.851092       1 main.go:299] handling current node
	I0927 18:29:58.850903       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:29:58.850934       1 main.go:299] handling current node
	I0927 18:30:08.842931       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:08.842975       1 main.go:299] handling current node
	I0927 18:30:18.842431       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:18.842467       1 main.go:299] handling current node
	I0927 18:30:28.851372       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:28.851405       1 main.go:299] handling current node
	I0927 18:30:38.849962       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:38.849999       1 main.go:299] handling current node
	I0927 18:30:48.842611       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:48.842673       1 main.go:299] handling current node
	I0927 18:30:58.842366       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:30:58.842407       1 main.go:299] handling current node
	
	
	==> kindnet [6ad520892089972f1369cf016d06c504c2e8914beeda19325797fe93c9ab3268] <==
	I0927 18:35:35.837367       1 main.go:299] handling current node
	I0927 18:35:45.830353       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:35:45.830452       1 main.go:299] handling current node
	I0927 18:35:55.837220       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:35:55.837291       1 main.go:299] handling current node
	I0927 18:36:05.837333       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:05.837367       1 main.go:299] handling current node
	I0927 18:36:15.838429       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:15.838469       1 main.go:299] handling current node
	I0927 18:36:25.837640       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:25.837676       1 main.go:299] handling current node
	I0927 18:36:35.838534       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:35.838572       1 main.go:299] handling current node
	I0927 18:36:45.830587       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:45.830630       1 main.go:299] handling current node
	I0927 18:36:55.837675       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:36:55.837812       1 main.go:299] handling current node
	I0927 18:37:05.837459       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:37:05.837499       1 main.go:299] handling current node
	I0927 18:37:15.837303       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:37:15.837344       1 main.go:299] handling current node
	I0927 18:37:25.837723       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:37:25.837817       1 main.go:299] handling current node
	I0927 18:37:35.838270       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 18:37:35.838303       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4ae2386761a348c28680c21b0d9b5594f5be2feb4a4a2b2ae98bea7196d41631] <==
	I0927 18:28:46.736930       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0927 18:28:46.787638       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0927 18:28:46.793662       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0927 18:28:46.793867       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0927 18:28:47.251543       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:28:47.293284       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0927 18:28:47.364151       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0927 18:28:47.365695       1 controller.go:606] quota admission added evaluator for: endpoints
	I0927 18:28:47.372550       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 18:28:48.357581       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0927 18:28:49.200964       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0927 18:28:49.315413       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0927 18:28:57.624911       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:29:04.781164       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0927 18:29:04.816244       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0927 18:29:13.844718       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:29:13.844766       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:29:13.844848       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 18:29:50.922524       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:29:50.922567       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:29:50.922713       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 18:30:30.468335       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:30:30.468413       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:30:30.468423       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0927 18:31:05.666158       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-apiserver [85ecd46044408d2857297ebcad2d6b44f801825745ef2a1970c3d3f0a8916151] <==
	I0927 18:34:11.450804       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:34:11.450812       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0927 18:34:45.591542       1 handler_proxy.go:102] no RequestInfo found in the context
	E0927 18:34:45.591616       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0927 18:34:45.591624       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 18:34:51.302005       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:34:51.302224       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:34:51.302241       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 18:35:26.534633       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:35:26.534686       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:35:26.534694       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 18:36:11.014663       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:36:11.014703       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:36:11.014860       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0927 18:36:44.001865       1 handler_proxy.go:102] no RequestInfo found in the context
	E0927 18:36:44.001964       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0927 18:36:44.001980       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 18:36:50.075441       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:36:50.075696       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:36:50.075715       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 18:37:34.355247       1 client.go:360] parsed scheme: "passthrough"
	I0927 18:37:34.355299       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 18:37:34.355313       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [0ca3bb4254dee146e1fa336141461b9e05d7d5888aa4384eeca8d2bbc3947055] <==
	I0927 18:29:04.823214       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0927 18:29:04.842256       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7l2lp"
	I0927 18:29:04.842672       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gccpt"
	I0927 18:29:04.848151       1 shared_informer.go:247] Caches are synced for stateful set 
	I0927 18:29:04.848568       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0927 18:29:04.857406       1 shared_informer.go:247] Caches are synced for resource quota 
	I0927 18:29:04.902365       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-g47l8"
	I0927 18:29:04.907960       1 shared_informer.go:247] Caches are synced for PV protection 
	I0927 18:29:04.908117       1 shared_informer.go:247] Caches are synced for PVC protection 
	E0927 18:29:04.918151       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0927 18:29:04.919194       1 shared_informer.go:247] Caches are synced for resource quota 
	I0927 18:29:04.934580       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0927 18:29:04.935604       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-btnnf"
	I0927 18:29:04.953679       1 shared_informer.go:247] Caches are synced for attach detach 
	I0927 18:29:04.981215       1 shared_informer.go:247] Caches are synced for expand 
	I0927 18:29:05.060062       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0927 18:29:05.360455       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0927 18:29:05.383151       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0927 18:29:05.383220       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0927 18:29:06.252128       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0927 18:29:06.278873       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-g47l8"
	I0927 18:29:09.710214       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0927 18:31:05.308734       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0927 18:31:05.448036       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0927 18:31:05.489311       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [a007a778324de3ee7b7afce2e87acafd2950db583243681a3da2d06f9e000e0b] <==
	E0927 18:33:32.565413       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:33:38.163382       1 request.go:655] Throttling request took 1.048227734s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 18:33:39.014921       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:34:03.067436       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:34:10.665528       1 request.go:655] Throttling request took 1.048290076s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0927 18:34:11.517281       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:34:33.569160       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:34:43.167740       1 request.go:655] Throttling request took 1.048085497s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0927 18:34:44.019202       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:35:04.071103       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:35:15.669752       1 request.go:655] Throttling request took 1.048251264s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 18:35:16.521350       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:35:34.573329       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:35:48.171945       1 request.go:655] Throttling request took 1.047685483s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 18:35:49.023353       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:36:05.075393       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:36:20.675750       1 request.go:655] Throttling request took 1.048361567s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0927 18:36:21.527744       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:36:35.579833       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:36:53.178225       1 request.go:655] Throttling request took 1.044980803s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 18:36:54.031471       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:37:06.081818       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 18:37:25.683072       1 request.go:655] Throttling request took 1.048408778s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0927 18:37:26.534866       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 18:37:36.619850       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
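
Note: the recurring "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1" errors are a knock-on effect of the metrics-server image pull failure: the v1beta1.metrics.k8s.io APIService never becomes available (the apiserver log above shows it returning 503), so discovery, resource quota and garbage collection keep retrying. An illustrative check (not part of the captured output):

    kubectl --context old-k8s-version-313926 get apiservice v1beta1.metrics.k8s.io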
	
	
	==> kube-proxy [4671e4d05905406bbb96bfcdbb2fb1cd585e6e11c6753369a4498908ff0b01dd] <==
	I0927 18:29:06.643445       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0927 18:29:06.643655       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0927 18:29:06.666117       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0927 18:29:06.666375       1 server_others.go:185] Using iptables Proxier.
	I0927 18:29:06.666976       1 server.go:650] Version: v1.20.0
	I0927 18:29:06.668154       1 config.go:315] Starting service config controller
	I0927 18:29:06.668264       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0927 18:29:06.668410       1 config.go:224] Starting endpoint slice config controller
	I0927 18:29:06.668496       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0927 18:29:06.768461       1 shared_informer.go:247] Caches are synced for service config 
	I0927 18:29:06.768665       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [fd7c8b16aaa1630639e5c92e06583ec5a1db88c90ca77e9d153cf42cffafcd67] <==
	I0927 18:31:45.752352       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0927 18:31:45.752433       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0927 18:31:45.778766       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0927 18:31:45.778938       1 server_others.go:185] Using iptables Proxier.
	I0927 18:31:45.779282       1 server.go:650] Version: v1.20.0
	I0927 18:31:45.780145       1 config.go:315] Starting service config controller
	I0927 18:31:45.780154       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0927 18:31:45.780195       1 config.go:224] Starting endpoint slice config controller
	I0927 18:31:45.780203       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0927 18:31:45.880276       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0927 18:31:45.880346       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [51d0717d23f66141256b9ea94ea8bb6a6f1b7110a5ca38e4e61bfc94a6749546] <==
	I0927 18:31:36.309794       1 serving.go:331] Generated self-signed cert in-memory
	W0927 18:31:42.974889       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:31:42.974930       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:31:42.974939       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:31:42.974946       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:31:43.243288       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0927 18:31:43.297791       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:31:43.297818       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:31:43.301790       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0927 18:31:43.398093       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9d696b48541d894c859c64e76593ba39e3a5d0822f144c00d7ae1e669d8bd289] <==
	I0927 18:28:41.536605       1 serving.go:331] Generated self-signed cert in-memory
	W0927 18:28:45.845227       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:28:45.845293       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:28:45.845307       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:28:45.845314       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:28:45.985655       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0927 18:28:45.997345       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:28:45.997567       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:28:45.997714       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0927 18:28:46.026358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 18:28:46.027205       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:46.027304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 18:28:46.027386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:46.027470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 18:28:46.030042       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 18:28:46.030160       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 18:28:46.030238       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:28:46.031517       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 18:28:46.035596       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 18:28:46.035662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 18:28:46.045736       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 18:28:46.880243       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:47.067854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0927 18:28:47.597792       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
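
Note: the burst of "forbidden" list/watch errors at 18:28:46 looks like the usual startup race while RBAC for system:kube-scheduler is still being bootstrapped; the errors taper off by 18:28:47, when the client-ca caches sync. If needed, the default binding can be inspected afterwards with an illustrative command (not part of the captured output):

    kubectl --context old-k8s-version-313926 get clusterrolebinding system:kube-scheduler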
	
	
	==> kubelet <==
	Sep 27 18:36:03 old-k8s-version-313926 kubelet[657]: E0927 18:36:03.135275     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:36:15 old-k8s-version-313926 kubelet[657]: E0927 18:36:15.145862     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: I0927 18:36:17.134246     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:36:17 old-k8s-version-313926 kubelet[657]: E0927 18:36:17.134586     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:36:26 old-k8s-version-313926 kubelet[657]: E0927 18:36:26.134898     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: I0927 18:36:31.134514     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:36:31 old-k8s-version-313926 kubelet[657]: E0927 18:36:31.135626     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:36:37 old-k8s-version-313926 kubelet[657]: E0927 18:36:37.135146     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: I0927 18:36:44.134713     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:36:44 old-k8s-version-313926 kubelet[657]: E0927 18:36:44.135538     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:36:52 old-k8s-version-313926 kubelet[657]: E0927 18:36:52.136573     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: I0927 18:36:58.134288     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:36:58 old-k8s-version-313926 kubelet[657]: E0927 18:36:58.134617     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:37:06 old-k8s-version-313926 kubelet[657]: E0927 18:37:06.141600     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: I0927 18:37:12.134629     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:37:12 old-k8s-version-313926 kubelet[657]: E0927 18:37:12.135076     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:37:17 old-k8s-version-313926 kubelet[657]: E0927 18:37:17.138317     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 18:37:24 old-k8s-version-313926 kubelet[657]: I0927 18:37:24.134979     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:37:24 old-k8s-version-313926 kubelet[657]: E0927 18:37:24.135768     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	Sep 27 18:37:30 old-k8s-version-313926 kubelet[657]: E0927 18:37:30.162620     657 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 27 18:37:30 old-k8s-version-313926 kubelet[657]: E0927 18:37:30.163097     657 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 27 18:37:30 old-k8s-version-313926 kubelet[657]: E0927 18:37:30.163342     657 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-m6vt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 27 18:37:30 old-k8s-version-313926 kubelet[657]: E0927 18:37:30.163632     657 pod_workers.go:191] Error syncing pod 17c8faf4-4aec-4195-b8eb-e009fca00d0e ("metrics-server-9975d5f86-cft95_kube-system(17c8faf4-4aec-4195-b8eb-e009fca00d0e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 27 18:37:36 old-k8s-version-313926 kubelet[657]: I0927 18:37:36.134405     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 0c8756f9ba31846b4b43e6df7dd27a36b47b4aeec8a233431cb6a998aec33c66
	Sep 27 18:37:36 old-k8s-version-313926 kubelet[657]: E0927 18:37:36.134899     657 pod_workers.go:191] Error syncing pod b374d488-878e-4aab-85b2-422fecd7eadf ("dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gm9vd_kubernetes-dashboard(b374d488-878e-4aab-85b2-422fecd7eadf)"
	
	
	==> kubernetes-dashboard [b502a59de9a326580d25cc92d7a2204245eec88a59d7dc7265fcdef287ee546a] <==
	2024/09/27 18:32:09 Using namespace: kubernetes-dashboard
	2024/09/27 18:32:09 Using in-cluster config to connect to apiserver
	2024/09/27 18:32:09 Using secret token for csrf signing
	2024/09/27 18:32:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/27 18:32:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/27 18:32:09 Successful initial request to the apiserver, version: v1.20.0
	2024/09/27 18:32:09 Generating JWE encryption key
	2024/09/27 18:32:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/27 18:32:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/27 18:32:09 Initializing JWE encryption key from synchronized object
	2024/09/27 18:32:09 Creating in-cluster Sidecar client
	2024/09/27 18:32:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:32:09 Serving insecurely on HTTP port: 9090
	2024/09/27 18:32:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:33:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:33:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:34:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:34:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:35:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:35:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:36:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:36:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 18:32:09 Starting overwatch
	
	
	==> storage-provisioner [86ac70820330350cace70a7ce803f53ecefaddb674b24a73b8afc3932ad88fef] <==
	I0927 18:31:45.682679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0927 18:32:15.684655       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e99ef95a69b41345c999ba0e2aed220b434429643972a4c498dcdd19ed7f32e0] <==
	I0927 18:32:32.287465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 18:32:32.315352       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 18:32:32.315680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 18:32:49.805357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 18:32:49.805665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-313926_028cc2e7-7840-4d36-a5f8-63e6f8ed64a5!
	I0927 18:32:49.813351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0c642421-c525-4766-bd8d-85986829d59f", APIVersion:"v1", ResourceVersion:"866", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-313926_028cc2e7-7840-4d36-a5f8-63e6f8ed64a5 became leader
	I0927 18:32:49.906002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-313926_028cc2e7-7840-4d36-a5f8-63e6f8ed64a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-313926 -n old-k8s-version-313926
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-313926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-cft95
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-313926 describe pod metrics-server-9975d5f86-cft95
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-313926 describe pod metrics-server-9975d5f86-cft95: exit status 1 (102.670524ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-cft95" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-313926 describe pod metrics-server-9975d5f86-cft95: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.89s)
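The failure above is driven by the metrics-server pod's image reference pointing at fake.domain, a host the configured resolver (192.168.76.1) cannot look up, so containerd's pull fails and the pod never leaves ErrImagePull/ImagePullBackOff. A minimal sketch for confirming that chain by hand, assuming the profile is still running and crictl is present in the node image (pod, image, and profile names are taken from the log above; note the pod had already been deleted by post-mortem time, per the NotFound error):

    # confirm the registry host has no DNS record, against the resolver the kubelet used
    nslookup fake.domain 192.168.76.1
    # reproduce containerd's pull error directly on the node
    out/minikube-linux-arm64 -p old-k8s-version-313926 ssh -- \
      sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4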

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.25
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.31.1/json-events 5.95
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 220.15
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 16.97
34 TestAddons/parallel/Ingress 19.5
35 TestAddons/parallel/InspektorGadget 12
36 TestAddons/parallel/MetricsServer 6.79
38 TestAddons/parallel/CSI 39.25
39 TestAddons/parallel/Headlamp 16.97
40 TestAddons/parallel/CloudSpanner 6.66
41 TestAddons/parallel/LocalPath 54.32
42 TestAddons/parallel/NvidiaDevicePlugin 5.98
43 TestAddons/parallel/Yakd 11.97
44 TestAddons/StoppedEnableDisable 12.24
45 TestCertOptions 38.21
46 TestCertExpiration 226.19
48 TestForceSystemdFlag 32.94
49 TestForceSystemdEnv 40.16
50 TestDockerEnvContainerd 45.99
55 TestErrorSpam/setup 33.49
56 TestErrorSpam/start 0.71
57 TestErrorSpam/status 0.99
58 TestErrorSpam/pause 1.88
59 TestErrorSpam/unpause 1.82
60 TestErrorSpam/stop 1.47
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 54.53
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.14
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.25
72 TestFunctional/serial/CacheCmd/cache/add_local 1.46
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 47.67
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.72
83 TestFunctional/serial/LogsFileCmd 1.81
84 TestFunctional/serial/InvalidService 4.52
86 TestFunctional/parallel/ConfigCmd 0.55
87 TestFunctional/parallel/DashboardCmd 9.53
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.13
94 TestFunctional/parallel/ServiceCmdConnect 9.64
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 25.24
98 TestFunctional/parallel/SSHCmd 0.66
99 TestFunctional/parallel/CpCmd 2.48
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 2.09
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
110 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.51
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ServiceCmd/List 0.64
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
126 TestFunctional/parallel/ProfileCmd/profile_list 0.5
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
129 TestFunctional/parallel/MountCmd/any-port 7.74
130 TestFunctional/parallel/ServiceCmd/Format 0.61
131 TestFunctional/parallel/ServiceCmd/URL 0.51
132 TestFunctional/parallel/MountCmd/specific-port 2.39
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
134 TestFunctional/parallel/Version/short 0.09
135 TestFunctional/parallel/Version/components 1.41
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.72
141 TestFunctional/parallel/ImageCommands/Setup 0.86
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.03
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 121.97
159 TestMultiControlPlane/serial/DeployApp 31.57
160 TestMultiControlPlane/serial/PingHostFromPods 1.72
161 TestMultiControlPlane/serial/AddWorkerNode 22.71
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
164 TestMultiControlPlane/serial/CopyFile 19.09
165 TestMultiControlPlane/serial/StopSecondaryNode 13.11
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.2
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 113.14
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.57
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
172 TestMultiControlPlane/serial/StopCluster 37.18
173 TestMultiControlPlane/serial/RestartCluster 67.3
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
175 TestMultiControlPlane/serial/AddSecondaryNode 42.19
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
180 TestJSONOutput/start/Command 91.72
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.77
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.76
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.83
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 42.61
206 TestKicCustomNetwork/use_default_bridge_network 31.79
207 TestKicExistingNetwork 32.47
208 TestKicCustomSubnet 34.76
209 TestKicStaticIP 34.3
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 66.45
214 TestMountStart/serial/StartWithMountFirst 6.58
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 9.05
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.64
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.22
221 TestMountStart/serial/RestartStopped 7.31
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 102.5
226 TestMultiNode/serial/DeployApp2Nodes 19.16
227 TestMultiNode/serial/PingHostFrom2Pods 0.98
228 TestMultiNode/serial/AddNode 15.94
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.69
231 TestMultiNode/serial/CopyFile 9.88
232 TestMultiNode/serial/StopNode 2.23
233 TestMultiNode/serial/StartAfterStop 9.39
234 TestMultiNode/serial/RestartKeepsNodes 92.94
235 TestMultiNode/serial/DeleteNode 5.69
236 TestMultiNode/serial/StopMultiNode 24.43
237 TestMultiNode/serial/RestartMultiNode 55.34
238 TestMultiNode/serial/ValidateNameConflict 32.2
243 TestPreload 123.89
245 TestScheduledStopUnix 106.09
248 TestInsufficientStorage 10.4
249 TestRunningBinaryUpgrade 80.08
251 TestKubernetesUpgrade 355.06
252 TestMissingContainerUpgrade 188.75
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 36.33
256 TestNoKubernetes/serial/StartWithStopK8s 19.43
257 TestNoKubernetes/serial/Start 11.49
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
259 TestNoKubernetes/serial/ProfileList 0.95
260 TestNoKubernetes/serial/Stop 1.21
261 TestNoKubernetes/serial/StartNoArgs 6.78
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
263 TestStoppedBinaryUpgrade/Setup 0.91
264 TestStoppedBinaryUpgrade/Upgrade 111.01
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
274 TestPause/serial/Start 89.33
282 TestNetworkPlugins/group/false 4.14
283 TestPause/serial/SecondStartNoReconfiguration 7.38
287 TestPause/serial/Pause 1.09
288 TestPause/serial/VerifyStatus 0.42
289 TestPause/serial/Unpause 0.93
290 TestPause/serial/PauseAgain 1.11
291 TestPause/serial/DeletePaused 3.21
292 TestPause/serial/VerifyDeletedResources 0.46
294 TestStartStop/group/old-k8s-version/serial/FirstStart 172.23
296 TestStartStop/group/no-preload/serial/FirstStart 70.84
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.81
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
299 TestStartStop/group/old-k8s-version/serial/Stop 12.37
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/no-preload/serial/DeployApp 8.37
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
304 TestStartStop/group/no-preload/serial/Stop 12.46
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/no-preload/serial/SecondStart 272.53
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
310 TestStartStop/group/no-preload/serial/Pause 3.16
312 TestStartStop/group/embed-certs/serial/FirstStart 92.09
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
316 TestStartStop/group/old-k8s-version/serial/Pause 2.91
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.89
319 TestStartStop/group/embed-certs/serial/DeployApp 10.4
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
321 TestStartStop/group/embed-certs/serial/Stop 12.1
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
324 TestStartStop/group/embed-certs/serial/SecondStart 266.12
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.52
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.61
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.72
334 TestStartStop/group/newest-cni/serial/FirstStart 39.04
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.66
339 TestNetworkPlugins/group/auto/Start 97.31
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.92
342 TestStartStop/group/newest-cni/serial/Stop 1.27
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/newest-cni/serial/SecondStart 24.1
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
348 TestStartStop/group/newest-cni/serial/Pause 3.81
349 TestNetworkPlugins/group/kindnet/Start 51.59
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/auto/KubeletFlags 0.28
352 TestNetworkPlugins/group/auto/NetCatPod 11.37
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
354 TestNetworkPlugins/group/kindnet/NetCatPod 9.33
355 TestNetworkPlugins/group/auto/DNS 0.39
356 TestNetworkPlugins/group/auto/Localhost 0.18
357 TestNetworkPlugins/group/auto/HairPin 0.17
358 TestNetworkPlugins/group/kindnet/DNS 0.24
359 TestNetworkPlugins/group/kindnet/Localhost 0.24
360 TestNetworkPlugins/group/kindnet/HairPin 0.17
361 TestNetworkPlugins/group/calico/Start 76.89
362 TestNetworkPlugins/group/custom-flannel/Start 63.06
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.35
365 TestNetworkPlugins/group/custom-flannel/DNS 0.27
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.38
370 TestNetworkPlugins/group/calico/NetCatPod 12.42
371 TestNetworkPlugins/group/calico/DNS 0.26
372 TestNetworkPlugins/group/calico/Localhost 0.21
373 TestNetworkPlugins/group/calico/HairPin 0.22
374 TestNetworkPlugins/group/enable-default-cni/Start 78.15
375 TestNetworkPlugins/group/flannel/Start 55.29
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
380 TestNetworkPlugins/group/flannel/NetCatPod 9.41
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
384 TestNetworkPlugins/group/flannel/DNS 0.22
385 TestNetworkPlugins/group/flannel/Localhost 0.21
386 TestNetworkPlugins/group/flannel/HairPin 0.24
387 TestNetworkPlugins/group/bridge/Start 45.02
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 9.26
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.14

TestDownloadOnly/v1.20.0/json-events (7.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-324473 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-324473 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.246442138s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.25s)
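With -o=json, minikube writes progress as line-delimited JSON events instead of human-readable text, which is what lets this test assert on the event stream. A rough sketch of consuming that stream, assuming jq is available; the event type and field names are assumptions about minikube's CloudEvents-style output rather than values taken from this report, and demo-profile is a placeholder name:

    out/minikube-linux-arm64 start -o=json --download-only -p demo-profile \
      --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'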

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 17:39:39.727313  299395 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0927 17:39:39.727394  299395 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-324473
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-324473: exit status 85 (67.25446ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-324473 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |          |
	|         | -p download-only-324473        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:39:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:39:32.529382  299400 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:39:32.529788  299400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:32.529797  299400 out.go:358] Setting ErrFile to fd 2...
	I0927 17:39:32.529803  299400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:32.530071  299400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	W0927 17:39:32.530211  299400 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19712-294006/.minikube/config/config.json: open /home/jenkins/minikube-integration/19712-294006/.minikube/config/config.json: no such file or directory
	I0927 17:39:32.530615  299400 out.go:352] Setting JSON to true
	I0927 17:39:32.531468  299400 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4924,"bootTime":1727453849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 17:39:32.531549  299400 start.go:139] virtualization:  
	I0927 17:39:32.534560  299400 out.go:97] [download-only-324473] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0927 17:39:32.534868  299400 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 17:39:32.534922  299400 notify.go:220] Checking for updates...
	I0927 17:39:32.536723  299400 out.go:169] MINIKUBE_LOCATION=19712
	I0927 17:39:32.538913  299400 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:39:32.541078  299400 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:39:32.542920  299400 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 17:39:32.544702  299400 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 17:39:32.548691  299400 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 17:39:32.549017  299400 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:39:32.570569  299400 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:39:32.570674  299400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:32.626948  299400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 17:39:32.617642512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:32.627157  299400 docker.go:318] overlay module found
	I0927 17:39:32.629161  299400 out.go:97] Using the docker driver based on user configuration
	I0927 17:39:32.629196  299400 start.go:297] selected driver: docker
	I0927 17:39:32.629203  299400 start.go:901] validating driver "docker" against <nil>
	I0927 17:39:32.629349  299400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:32.676366  299400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 17:39:32.66643465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:32.676578  299400 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:39:32.676850  299400 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 17:39:32.677003  299400 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 17:39:32.678969  299400 out.go:169] Using Docker driver with root privileges
	I0927 17:39:32.680672  299400 cni.go:84] Creating CNI manager for ""
	I0927 17:39:32.680737  299400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 17:39:32.680751  299400 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:39:32.680841  299400 start.go:340] cluster config:
	{Name:download-only-324473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-324473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:39:32.682564  299400 out.go:97] Starting "download-only-324473" primary control-plane node in "download-only-324473" cluster
	I0927 17:39:32.682591  299400 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 17:39:32.684252  299400 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 17:39:32.684276  299400 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 17:39:32.684434  299400 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 17:39:32.700397  299400 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 17:39:32.700573  299400 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 17:39:32.700669  299400 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 17:39:32.748499  299400 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0927 17:39:32.748526  299400 cache.go:56] Caching tarball of preloaded images
	I0927 17:39:32.748698  299400 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0927 17:39:32.750763  299400 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 17:39:32.750791  299400 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0927 17:39:32.835481  299400 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0927 17:39:36.974579  299400 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0927 17:39:36.974696  299400 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-324473 host does not exist
	  To start a cluster, run: "minikube start -p download-only-324473"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
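The Last Start log above shows how the preload is fetched: the download URL carries an md5 pin (?checksum=md5:...), and the checksum is saved and re-verified against the cached tarball. A quick way to re-check the cached file by hand, using the digest and path exactly as they appear in the log:

    md5sum /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
    # expected digest, from the ?checksum=md5: parameter above: 7e3d48ccb9f143791669d02e14ce1643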

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-324473
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-496322 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-496322 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.948513998s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.95s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 17:39:46.099343  299395 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0927 17:39:46.099381  299395 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-496322
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-496322: exit status 85 (70.398666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-324473 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | -p download-only-324473        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| delete  | -p download-only-324473        | download-only-324473 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC | 27 Sep 24 17:39 UTC |
	| start   | -o=json --download-only        | download-only-496322 | jenkins | v1.34.0 | 27 Sep 24 17:39 UTC |                     |
	|         | -p download-only-496322        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:39:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:39:40.198626  299600 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:39:40.198760  299600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:40.198770  299600 out.go:358] Setting ErrFile to fd 2...
	I0927 17:39:40.198775  299600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:39:40.199052  299600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 17:39:40.199503  299600 out.go:352] Setting JSON to true
	I0927 17:39:40.200401  299600 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4932,"bootTime":1727453849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 17:39:40.200483  299600 start.go:139] virtualization:  
	I0927 17:39:40.202704  299600 out.go:97] [download-only-496322] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 17:39:40.202898  299600 notify.go:220] Checking for updates...
	I0927 17:39:40.204782  299600 out.go:169] MINIKUBE_LOCATION=19712
	I0927 17:39:40.206652  299600 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:39:40.208415  299600 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:39:40.210063  299600 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 17:39:40.211801  299600 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 17:39:40.215177  299600 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 17:39:40.215539  299600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:39:40.238109  299600 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:39:40.238230  299600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:40.306637  299600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 17:39:40.296020696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:40.306757  299600 docker.go:318] overlay module found
	I0927 17:39:40.308557  299600 out.go:97] Using the docker driver based on user configuration
	I0927 17:39:40.308582  299600 start.go:297] selected driver: docker
	I0927 17:39:40.308589  299600 start.go:901] validating driver "docker" against <nil>
	I0927 17:39:40.308712  299600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:39:40.356248  299600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 17:39:40.346670669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:39:40.356465  299600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:39:40.356726  299600 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 17:39:40.356904  299600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 17:39:40.359055  299600 out.go:169] Using Docker driver with root privileges
	I0927 17:39:40.360972  299600 cni.go:84] Creating CNI manager for ""
	I0927 17:39:40.361050  299600 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0927 17:39:40.361065  299600 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:39:40.361158  299600 start.go:340] cluster config:
	{Name:download-only-496322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-496322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:39:40.363057  299600 out.go:97] Starting "download-only-496322" primary control-plane node in "download-only-496322" cluster
	I0927 17:39:40.363082  299600 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0927 17:39:40.364749  299600 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 17:39:40.364812  299600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 17:39:40.364882  299600 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 17:39:40.379860  299600 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 17:39:40.380007  299600 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 17:39:40.380026  299600 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 17:39:40.380031  299600 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 17:39:40.380038  299600 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 17:39:40.422566  299600 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 17:39:40.422603  299600 cache.go:56] Caching tarball of preloaded images
	I0927 17:39:40.422770  299600 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0927 17:39:40.424957  299600 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 17:39:40.424985  299600 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0927 17:39:40.515613  299600 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0927 17:39:44.506124  299600 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0927 17:39:44.506231  299600 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19712-294006/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-496322 host does not exist
	  To start a cluster, run: "minikube start -p download-only-496322"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
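Note: the preload URL above embeds its expected digest in the ?checksum=md5:... query parameter, and the download step (preload.go:236/254 in the log) verifies it before the tarball is used. A rough manual equivalent — a sketch, not what the test runs — using the URL and digest from this log:

  $ curl -sLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
  $ md5sum preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
  # expected: b0cdb5ac9449e6e1388c2153988f76f5 — any other digest means a corrupt download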

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-496322
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
I0927 17:39:47.308042  299395 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-691798 --alsologtostderr --binary-mirror http://127.0.0.1:44453 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-691798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-691798
--- PASS: TestBinaryMirror (0.55s)
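For context, --binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to an alternate HTTP endpoint (the test serves one on 127.0.0.1:44453). A minimal sketch of the same idea, assuming a ./mirror directory laid out like dl.k8s.io — the profile name, port, and directory here are illustrative, not from the test:

  $ python3 -m http.server 8080 --directory ./mirror &
  $ out/minikube-linux-arm64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:8080 --driver=docker --container-runtime=containerd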

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-583947
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-583947: exit status 85 (60.327234ms)

-- stdout --
	* Profile "addons-583947" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-583947"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-583947
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-583947: exit status 85 (70.839908ms)

-- stdout --
	* Profile "addons-583947" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-583947"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (220.15s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-583947 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-583947 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m40.142797622s)
--- PASS: TestAddons/Setup (220.15s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-583947 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-583947 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
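What this exercises: once the gcp-auth addon is enabled, it is expected to replicate its credentials Secret into namespaces created afterwards, so the check reduces to two commands (a sketch; -o name is just a terser existence test than the test's own invocation):

  $ kubectl --context addons-583947 create ns new-namespace
  $ kubectl --context addons-583947 get secret gcp-auth -n new-namespace -o name
  secret/gcp-auth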

TestAddons/parallel/Registry (16.97s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 5.516287ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zg7pk" [c6e96250-7c38-480c-842d-2d5612850d9f] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004985502s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w654q" [5450ef2a-71a8-4be2-bd6d-c91f93d716b9] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003440373s
addons_test.go:338: (dbg) Run:  kubectl --context addons-583947 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-583947 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-583947 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.968020619s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 ip
2024/09/27 17:47:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.97s)
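The in-cluster probe above is the interesting step: the registry addon is reachable through cluster DNS, so a throwaway busybox pod can spider it directly. Standalone sketch (the pod name registry-probe is illustrative):

  $ kubectl --context addons-583947 run --rm -it registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"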

TestAddons/parallel/Ingress (19.5s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-583947 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-583947 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-583947 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [dc8ac720-be42-4736-8ce1-52da2cdf7c98] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [dc8ac720-be42-4736-8ce1-52da2cdf7c98] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003977547s
I0927 17:48:38.610263  299395 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-583947 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable ingress-dns --alsologtostderr -v=1: (1.70111865s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable ingress --alsologtostderr -v=1: (7.829749805s)
--- PASS: TestAddons/parallel/Ingress (19.50s)
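Two details worth noting in the flow above: the nginx Ingress matches on the Host header rather than the path, and ingress-dns answers DNS for the test domain straight from the node IP. Condensed from the commands in this log:

  $ out/minikube-linux-arm64 -p addons-583947 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  $ nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-583947 ip)"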

TestAddons/parallel/InspektorGadget (12s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x67ht" [a8dbb30d-a0bf-4314-99ff-9f1e6b2e84d0] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00395061s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-583947
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-583947: (5.99806841s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

TestAddons/parallel/MetricsServer (6.79s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.619336ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h78zb" [866e445c-c734-4c72-b8c1-55af9c7258fd] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003693351s
addons_test.go:413: (dbg) Run:  kubectl --context addons-583947 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

TestAddons/parallel/CSI (39.25s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0927 17:47:50.651372  299395 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 17:47:50.657363  299395 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 17:47:50.657403  299395 kapi.go:107] duration metric: took 7.589435ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.601004ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-583947 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-583947 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [81389374-6918-48dc-ac4e-657d16e728d1] Pending
helpers_test.go:344: "task-pv-pod" [81389374-6918-48dc-ac4e-657d16e728d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [81389374-6918-48dc-ac4e-657d16e728d1] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00413453s
addons_test.go:528: (dbg) Run:  kubectl --context addons-583947 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-583947 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-583947 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-583947 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-583947 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-583947 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [da5355b3-4bbe-4090-a867-0f8c3e0d5cf4] Pending
helpers_test.go:344: "task-pv-pod-restore" [da5355b3-4bbe-4090-a867-0f8c3e0d5cf4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [da5355b3-4bbe-4090-a867-0f8c3e0d5cf4] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003394531s
addons_test.go:570: (dbg) Run:  kubectl --context addons-583947 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-583947 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-583947 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.829829167s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable volumesnapshots --alsologtostderr -v=1: (1.000641s)
--- PASS: TestAddons/parallel/CSI (39.25s)
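The sequence above is the standard CSI snapshot round-trip: PVC -> pod -> VolumeSnapshot -> restored PVC -> pod. The real manifests live in testdata/csi-hostpath-driver/; below is an illustrative reconstruction of just the snapshot step, where the volumeSnapshotClassName is an assumption based on the addon's defaults, not taken from this log:

  $ cat <<'EOF' | kubectl --context addons-583947 apply -f -
  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshot
  metadata:
    name: new-snapshot-demo
  spec:
    volumeSnapshotClassName: csi-hostpath-snapclass
    source:
      persistentVolumeClaimName: hpvc
  EOF
  $ kubectl --context addons-583947 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'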

TestAddons/parallel/Headlamp (16.97s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-583947 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-583947 --alsologtostderr -v=1: (1.059147836s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-skwfq" [4bf01a18-e1ff-49aa-a6d7-48fcf92de224] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-skwfq" [4bf01a18-e1ff-49aa-a6d7-48fcf92de224] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-skwfq" [4bf01a18-e1ff-49aa-a6d7-48fcf92de224] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004143316s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable headlamp --alsologtostderr -v=1: (5.907310267s)
--- PASS: TestAddons/parallel/Headlamp (16.97s)

TestAddons/parallel/CloudSpanner (6.66s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-979s9" [27b0025a-a8d7-4e2b-8fc5-466c27d00016] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.011591087s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-583947
--- PASS: TestAddons/parallel/CloudSpanner (6.66s)

TestAddons/parallel/LocalPath (54.32s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-583947 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-583947 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [907ebdf9-7cce-4cf0-86b7-4e495827f2ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [907ebdf9-7cce-4cf0-86b7-4e495827f2ab] Running
helpers_test.go:344: "test-local-path" [907ebdf9-7cce-4cf0-86b7-4e495827f2ab] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [907ebdf9-7cce-4cf0-86b7-4e495827f2ab] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003285304s
addons_test.go:938: (dbg) Run:  kubectl --context addons-583947 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 ssh "cat /opt/local-path-provisioner/pvc-351ad038-7d7f-447f-8f1a-7eb4b015d221_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-583947 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-583947 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.710681233s)
--- PASS: TestAddons/parallel/LocalPath (54.32s)

TestAddons/parallel/NvidiaDevicePlugin (5.98s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9r5dq" [c3c31223-d780-4d40-836d-0fdcf06acfdf] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00515536s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-583947
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.98s)

TestAddons/parallel/Yakd (11.97s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dm4sw" [7c0670da-3f6c-4d1a-b098-53dbea2cc26e] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008014052s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-583947 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-583947 addons disable yakd --alsologtostderr -v=1: (5.965107069s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

TestAddons/StoppedEnableDisable (12.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-583947
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-583947: (11.982152309s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-583947
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-583947
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-583947
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

TestCertOptions (38.21s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-933427 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-933427 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.519730348s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-933427 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-933427 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-933427 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-933427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-933427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-933427: (1.974984812s)
--- PASS: TestCertOptions (38.21s)
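The openssl call above dumps the entire certificate; what the test actually cares about is that the extra --apiserver-ips/--apiserver-names values end up as SANs in apiserver.crt. Narrowed down (the grep is only for readability and is not part of the test):

  $ out/minikube-linux-arm64 -p cert-options-933427 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'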

TestCertExpiration (226.19s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-759007 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-759007 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.715719691s)
E0927 18:27:46.668078  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-759007 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-759007 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.108457095s)
helpers_test.go:175: Cleaning up "cert-expiration-759007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-759007
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-759007: (2.366419129s)
--- PASS: TestCertExpiration (226.19s)

TestForceSystemdFlag (32.94s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-711153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-711153 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.565344861s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-711153 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-711153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-711153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-711153: (2.067082449s)
--- PASS: TestForceSystemdFlag (32.94s)
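The config check above boils down to the systemd cgroup driver being switched on in containerd's runc options. In essence (the expected output line is an assumption about what --force-systemd toggles, not copied from this log):

  $ out/minikube-linux-arm64 -p force-systemd-flag-711153 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup
  SystemdCgroup = true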

TestForceSystemdEnv (40.16s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-641482 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-641482 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.359860728s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-641482 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-641482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-641482
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-641482: (2.35842174s)
--- PASS: TestForceSystemdEnv (40.16s)

TestDockerEnvContainerd (45.99s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-855817 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-855817 --driver=docker  --container-runtime=containerd: (30.455257288s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-855817"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xU26QDliKqfK/agent.322419" SSH_AGENT_PID="322420" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xU26QDliKqfK/agent.322419" SSH_AGENT_PID="322420" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xU26QDliKqfK/agent.322419" SSH_AGENT_PID="322420" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.318052862s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-xU26QDliKqfK/agent.322419" SSH_AGENT_PID="322420" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-855817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-855817
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-855817: (1.91583457s)
--- PASS: TestDockerEnvContainerd (45.99s)
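The --ssh-host --ssh-add form used here exports DOCKER_HOST=ssh://docker@127.0.0.1:<port> plus a loaded SSH agent, so a host-side docker CLI drives the daemon inside the minikube container. An interactive sketch of the same flow (the image tag is illustrative; DOCKER_BUILDKIT=0 forces the classic builder, as in the test):

  $ eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-855817)"
  $ docker version
  $ DOCKER_BUILDKIT=0 docker build -t local/demo:latest testdata/docker-env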

TestErrorSpam/setup (33.49s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-544984 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-544984 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-544984 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-544984 --driver=docker  --container-runtime=containerd: (33.486090827s)
--- PASS: TestErrorSpam/setup (33.49s)

TestErrorSpam/start (0.71s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (0.99s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 status
--- PASS: TestErrorSpam/status (0.99s)

TestErrorSpam/pause (1.88s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 pause
--- PASS: TestErrorSpam/pause (1.88s)

TestErrorSpam/unpause (1.82s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (1.47s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 stop: (1.275014251s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-544984 --log_dir /tmp/nospam-544984 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19712-294006/.minikube/files/etc/test/nested/copy/299395/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.53s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-427306 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (54.52555471s)
--- PASS: TestFunctional/serial/StartWithProxy (54.53s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.14s)
=== RUN   TestFunctional/serial/SoftStart
I0927 17:51:34.258765  299395 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-427306 --alsologtostderr -v=8: (6.142478784s)
functional_test.go:663: soft start took 6.143863486s for "functional-427306" cluster.
I0927 17:51:40.401561  299395 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.14s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-427306 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:3.1: (1.559810474s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:3.3: (1.425823968s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 cache add registry.k8s.io/pause:latest: (1.266911979s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

TestFunctional/serial/CacheCmd/cache/add_local (1.46s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-427306 /tmp/TestFunctionalserialCacheCmdcacheadd_local2636802896/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache add minikube-local-cache-test:functional-427306
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache delete minikube-local-cache-test:functional-427306
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-427306
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.46s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.076972ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 cache reload: (1.030878057s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
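The contract being tested: removing an image from the node's container runtime must be reversible with `cache reload`, which re-pushes everything in minikube's local image cache. Condensed from the commands above:

  $ out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-427306 cache reload
  $ out/minikube-linux-arm64 -p functional-427306 ssh sudo crictl inspecti registry.k8s.io/pause:latest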

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 kubectl -- --context functional-427306 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-427306 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (47.67s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-427306 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.664747821s)
functional_test.go:761: restart took 47.664856949s for "functional-427306" cluster.
I0927 17:52:36.709358  299395 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (47.67s)
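--extra-config=apiserver.<flag>=<value> is passed straight through to the kube-apiserver command line. One way to confirm the plugin list landed — a sketch; the pod name assumes the usual kube-apiserver-<node> convention and is not from this log:

  $ kubectl --context functional-427306 -n kube-system get pod kube-apiserver-functional-427306 \
      -o jsonpath='{.spec.containers[0].command[*]}' | tr ' ' '\n' | grep admission-plugins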

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-427306 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.72s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 logs: (1.715380126s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

TestFunctional/serial/LogsFileCmd (1.81s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 logs --file /tmp/TestFunctionalserialLogsFileCmd878979604/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 logs --file /tmp/TestFunctionalserialLogsFileCmd878979604/001/logs.txt: (1.805208656s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.81s)

TestFunctional/serial/InvalidService (4.52s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-427306 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-427306
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-427306: exit status 115 (661.604643ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32471 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-427306 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.52s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 config get cpus: exit status 14 (82.03952ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 config get cpus: exit status 14 (72.178251ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)
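
For context, `minikube config get` exits with status 14 when the queried key is absent, which is exactly what the test asserts after each `config unset`. The round trip exercised above, as standalone commands (behavior as observed in this log):

	out/minikube-linux-arm64 -p functional-427306 config unset cpus   # ensure the key is absent
	out/minikube-linux-arm64 -p functional-427306 config get cpus     # exit status 14: key not found
	out/minikube-linux-arm64 -p functional-427306 config set cpus 2   # write the key
	out/minikube-linux-arm64 -p functional-427306 config get cpus     # should print 2, exit status 0
	out/minikube-linux-arm64 -p functional-427306 config unset cpus   # remove it again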

TestFunctional/parallel/DashboardCmd (9.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-427306 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-427306 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 337143: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.53s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-427306 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (181.85583ms)

-- stdout --
	* [functional-427306] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 17:53:16.712081  336844 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:53:16.712200  336844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:53:16.712235  336844 out.go:358] Setting ErrFile to fd 2...
	I0927 17:53:16.712255  336844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:53:16.712528  336844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 17:53:16.712912  336844 out.go:352] Setting JSON to false
	I0927 17:53:16.713956  336844 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5748,"bootTime":1727453849,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 17:53:16.714034  336844 start.go:139] virtualization:  
	I0927 17:53:16.716872  336844 out.go:177] * [functional-427306] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 17:53:16.718946  336844 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:53:16.720897  336844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:53:16.721057  336844 notify.go:220] Checking for updates...
	I0927 17:53:16.724702  336844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:53:16.726853  336844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 17:53:16.729083  336844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 17:53:16.735757  336844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:53:16.741963  336844 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:53:16.742525  336844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:53:16.770753  336844 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:53:16.770930  336844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:53:16.822220  336844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 17:53:16.810653816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:53:16.822788  336844 docker.go:318] overlay module found
	I0927 17:53:16.825181  336844 out.go:177] * Using the docker driver based on existing profile
	I0927 17:53:16.827425  336844 start.go:297] selected driver: docker
	I0927 17:53:16.827442  336844 start.go:901] validating driver "docker" against &{Name:functional-427306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-427306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:53:16.827552  336844 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:53:16.830270  336844 out.go:201] 
	W0927 17:53:16.832182  336844 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 17:53:16.834312  336844 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
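
The failing invocation above is expected: `--dry-run` validates flags against the existing profile without touching the cluster, and a 250MB request is below minikube's 1800MB usable memory floor, so the run exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). The two checks the test performs, taken directly from the log:

	out/minikube-linux-arm64 start -p functional-427306 --dry-run --memory 250MB --driver=docker --container-runtime=containerd   # exit status 23
	out/minikube-linux-arm64 start -p functional-427306 --dry-run --driver=docker --container-runtime=containerd                  # validates cleanly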

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-427306 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-427306 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (186.91794ms)

-- stdout --
	* [functional-427306] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 17:53:16.519422  336800 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:53:16.519640  336800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:53:16.519669  336800 out.go:358] Setting ErrFile to fd 2...
	I0927 17:53:16.519690  336800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:53:16.520658  336800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 17:53:16.521166  336800 out.go:352] Setting JSON to false
	I0927 17:53:16.522309  336800 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5748,"bootTime":1727453849,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 17:53:16.522420  336800 start.go:139] virtualization:  
	I0927 17:53:16.525699  336800 out.go:177] * [functional-427306] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0927 17:53:16.528773  336800 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:53:16.528814  336800 notify.go:220] Checking for updates...
	I0927 17:53:16.533467  336800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:53:16.535734  336800 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 17:53:16.538199  336800 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 17:53:16.540596  336800 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 17:53:16.542718  336800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:53:16.545674  336800 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:53:16.546217  336800 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:53:16.575342  336800 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 17:53:16.575516  336800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:53:16.635341  336800 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 17:53:16.625811121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:53:16.635480  336800 docker.go:318] overlay module found
	I0927 17:53:16.638438  336800 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 17:53:16.641407  336800 start.go:297] selected driver: docker
	I0927 17:53:16.641431  336800 start.go:901] validating driver "docker" against &{Name:functional-427306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-427306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:53:16.642327  336800 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:53:16.645161  336800 out.go:201] 
	W0927 17:53:16.647520  336800 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 17:53:16.650353  336800 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

TestFunctional/parallel/ServiceCmdConnect (9.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-427306 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-427306 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qc5g5" [939ed133-4c09-40d4-bf59-95fae3db0d26] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qc5g5" [939ed133-4c09-40d4-bf59-95fae3db0d26] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004309683s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31102
functional_test.go:1675: http://192.168.49.2:31102: success! body:

Hostname: hello-node-connect-65d86f57f4-qc5g5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31102
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.64s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (25.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9cab5767-ec9f-4b65-873f-dedd23ea48b0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004126685s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-427306 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-427306 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-427306 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-427306 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a582631c-403a-4e5c-bd45-e9b6ad6ab9b3] Pending
helpers_test.go:344: "sp-pod" [a582631c-403a-4e5c-bd45-e9b6ad6ab9b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a582631c-403a-4e5c-bd45-e9b6ad6ab9b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003870836s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-427306 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-427306 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-427306 delete -f testdata/storage-provisioner/pod.yaml: (1.187571143s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-427306 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [998ff739-2f2b-410d-a3fc-f1300ce808a8] Pending
helpers_test.go:344: "sp-pod" [998ff739-2f2b-410d-a3fc-f1300ce808a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [998ff739-2f2b-410d-a3fc-f1300ce808a8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003577895s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-427306 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.24s)
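
The sequence above is what demonstrates persistence: the first sp-pod touches /tmp/mount/foo on the claim, the pod is deleted, and a second sp-pod backed by the same PVC still lists the file. Condensed, with the same manifests the test uses:

	kubectl --context functional-427306 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-427306 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-427306 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-427306 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-427306 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-427306 exec sp-pod -- ls /tmp/mount    # foo survives the pod restart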

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh -n functional-427306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cp functional-427306:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1054915705/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh -n functional-427306 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh -n functional-427306 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.48s)
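
The three `cp` invocations cover host-to-node, node-to-host, and host-to-a-nonexistent-node-path copies, each verified with `ssh ... sudo cat`. The general shape, as used above (the local destination in the second line is illustrative, not from the log):

	out/minikube-linux-arm64 -p functional-427306 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
	out/minikube-linux-arm64 -p functional-427306 cp functional-427306:/home/docker/cp-test.txt ./cp-test.txt # node -> host
	out/minikube-linux-arm64 -p functional-427306 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # parent dirs are created on the node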

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/299395/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /etc/test/nested/copy/299395/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/299395.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /etc/ssl/certs/299395.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/299395.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /usr/share/ca-certificates/299395.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2993952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /etc/ssl/certs/2993952.pem"
E0927 17:53:28.104983  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.111752  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.123096  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.145361  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.188918  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.271636  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:53:28.433669  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2993952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /usr/share/ca-certificates/2993952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
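
A note on the filenames: /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 appear to be OpenSSL subject-hash names for the two synced PEMs, which is how a trust store indexes certificates; that mechanism is an assumption here, not stated in the log. If it holds, the hash for a given cert can be checked with:

	openssl x509 -noout -hash -in /etc/ssl/certs/299395.pem    # should print 51391683 if the hash-link convention applies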

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-427306 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active docker"
2024/09/27 17:53:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active docker": exit status 1 (281.102575ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active crio": exit status 1 (286.044965ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
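
Both probes behave as intended for a containerd cluster: `systemctl is-active` prints "inactive" and exits non-zero for docker and crio (the "Process exited with status 3" stderr matches systemd's conventional exit code 3 for an inactive unit), and that failure propagates through `minikube ssh` as exit status 1. Reproduced directly from the commands above:

	out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active docker"   # prints "inactive", non-zero exit
	out/minikube-linux-arm64 -p functional-427306 ssh "sudo systemctl is-active crio"     # likewise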

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 334270: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-427306 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cde2154d-9a96-4e7b-8e56-c63cfb575b42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cde2154d-9a96-4e7b-8e56-c63cfb575b42] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004422897s
I0927 17:52:56.176178  299395 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-427306 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.75.178 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-427306 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-427306 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-427306 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9wvqb" [b828952a-a380-4c37-9bc2-aec11583aa1c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9wvqb" [b828952a-a380-4c37-9bc2-aec11583aa1c] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005192052s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
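
The deployment/expose pair above is the NodePort smoke test that the remaining ServiceCmd sub-tests build on: once the echoserver pod is healthy, `service list`, `--url`, `--https`, and `--format` are all exercised against it. The setup, as run:

	kubectl --context functional-427306 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-427306 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-427306 service hello-node --url    # e.g. http://192.168.49.2:32265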

TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service list -o json
functional_test.go:1494: Took "602.522376ms" to run "out/minikube-linux-arm64 -p functional-427306 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "439.483464ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "61.317656ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "402.869575ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "67.731199ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32265
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

TestFunctional/parallel/MountCmd/any-port (7.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdany-port2230203864/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727459594097334621" to /tmp/TestFunctionalparallelMountCmdany-port2230203864/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727459594097334621" to /tmp/TestFunctionalparallelMountCmdany-port2230203864/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727459594097334621" to /tmp/TestFunctionalparallelMountCmdany-port2230203864/001/test-1727459594097334621
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 17:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 17:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 17:53 test-1727459594097334621
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh cat /mount-9p/test-1727459594097334621
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-427306 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3fc8dc08-06a7-4b6f-9efe-cd36ea096765] Pending
helpers_test.go:344: "busybox-mount" [3fc8dc08-06a7-4b6f-9efe-cd36ea096765] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3fc8dc08-06a7-4b6f-9efe-cd36ea096765] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3fc8dc08-06a7-4b6f-9efe-cd36ea096765] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004633823s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-427306 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdany-port2230203864/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)
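
The 9p flow above: the host temp directory is mounted into the guest at /mount-9p, `findmnt` confirms the 9p filesystem, and the busybox-mount pod both reads a host-written file and writes one back (created-by-pod) that the host-side stat then sees. Verifying a mount by hand looks roughly like this (the host path is illustrative; any host directory works):

	out/minikube-linux-arm64 mount -p functional-427306 /tmp/somedir:/mount-9p &
	out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p"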

TestFunctional/parallel/ServiceCmd/Format (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.61s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32265
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)

TestFunctional/parallel/MountCmd/specific-port (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdspecific-port2325675138/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (451.66748ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0927 17:53:22.287102  299395 retry.go:31] will retry after 696.724748ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdspecific-port2325675138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh "sudo umount -f /mount-9p": exit status 1 (373.867092ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-427306 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdspecific-port2325675138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)
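
Note: this test mounts a host directory into the guest over 9p on a fixed port, verifies it with findmnt, then unmounts; the first findmnt can race the mount daemon, which is why the log shows one retry. A hand-run sketch under the same flags (the host path is a placeholder):

out/minikube-linux-arm64 mount -p functional-427306 /tmp/host-dir:/mount-9p --port 46464 &
out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T /mount-9p | grep 9p"   # retry briefly if it loses the race
out/minikube-linux-arm64 -p functional-427306 ssh "sudo umount -f /mount-9p"         # "not mounted" just means cleanup already ran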

TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T" /mount1: (1.075259611s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-427306 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-427306 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268962054/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.41s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 version -o=json --components
E0927 17:53:29.398131  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 version -o=json --components: (1.407133187s)
--- PASS: TestFunctional/parallel/Version/components (1.41s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-427306 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-427306
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-427306
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-427306 image ls --format short --alsologtostderr:
I0927 17:53:33.831174  339652 out.go:345] Setting OutFile to fd 1 ...
I0927 17:53:33.831410  339652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:33.831454  339652 out.go:358] Setting ErrFile to fd 2...
I0927 17:53:33.831478  339652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:33.831758  339652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
I0927 17:53:33.832521  339652 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:33.832683  339652 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:33.833189  339652 cli_runner.go:164] Run: docker container inspect functional-427306 --format={{.State.Status}}
I0927 17:53:33.854361  339652 ssh_runner.go:195] Run: systemctl --version
I0927 17:53:33.854428  339652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-427306
I0927 17:53:33.879862  339652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/functional-427306/id_rsa Username:docker}
I0927 17:53:33.977854  339652 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
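
Note: per the stderr above, image ls is backed by "sudo crictl images --output json" inside the node, so the same listing can be taken over ssh. A sketch (the jq filter is an illustrative assumption about crictl's JSON shape):

out/minikube-linux-arm64 -p functional-427306 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'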

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-427306 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:6e8672 | 67.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| docker.io/kicbase/echo-server               | functional-427306  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| docker.io/library/minikube-local-cache-test | functional-427306  | sha256:89359a | 992B   |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-427306 image ls --format table --alsologtostderr:
I0927 17:53:34.143717  339719 out.go:345] Setting OutFile to fd 1 ...
I0927 17:53:34.143906  339719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.143918  339719 out.go:358] Setting ErrFile to fd 2...
I0927 17:53:34.143924  339719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.144210  339719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
I0927 17:53:34.145013  339719 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.145183  339719 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.145783  339719 cli_runner.go:164] Run: docker container inspect functional-427306 --format={{.State.Status}}
I0927 17:53:34.171096  339719 ssh_runner.go:195] Run: systemctl --version
I0927 17:53:34.171156  339719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-427306
I0927 17:53:34.197437  339719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/functional-427306/id_rsa Username:docker}
I0927 17:53:34.290782  339719 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-427306 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-427306"],"size":"2173567"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:89359a928204e50d681e0405ed98419465f9f631a54e62a8fcee7285392a9953","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-427306"],"size":"992"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:dd8c1960fb53442fe45ac7e43591fec01c82d05895494c44cde543a62ec4ef2e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67693717"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-427306 image ls --format json --alsologtostderr:
I0927 17:53:34.139584  339715 out.go:345] Setting OutFile to fd 1 ...
I0927 17:53:34.139849  339715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.139884  339715 out.go:358] Setting ErrFile to fd 2...
I0927 17:53:34.139974  339715 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.140408  339715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
I0927 17:53:34.141402  339715 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.141597  339715 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.142280  339715 cli_runner.go:164] Run: docker container inspect functional-427306 --format={{.State.Status}}
I0927 17:53:34.169732  339715 ssh_runner.go:195] Run: systemctl --version
I0927 17:53:34.169814  339715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-427306
I0927 17:53:34.190847  339715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/functional-427306/id_rsa Username:docker}
I0927 17:53:34.282872  339715 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
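
Note: the output above is a flat JSON array of {id, repoDigests, repoTags, size} objects, which makes it convenient for scripting. A sketch (the jq filter is illustrative; untagged images carry an empty repoTags array, hence the // fallback):

out/minikube-linux-arm64 -p functional-427306 image ls --format json | jq -r '.[] | "\(.repoTags[0] // "<untagged>")\t\(.size)"'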

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-427306 image ls --format yaml --alsologtostderr:
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:89359a928204e50d681e0405ed98419465f9f631a54e62a8fcee7285392a9953
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-427306
size: "992"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:dd8c1960fb53442fe45ac7e43591fec01c82d05895494c44cde543a62ec4ef2e
repoTags:
- docker.io/library/nginx:latest
size: "67693717"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-427306
size: "2173567"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-427306 image ls --format yaml --alsologtostderr:
I0927 17:53:33.842109  339653 out.go:345] Setting OutFile to fd 1 ...
I0927 17:53:33.842445  339653 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:33.842460  339653 out.go:358] Setting ErrFile to fd 2...
I0927 17:53:33.842467  339653 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:33.844602  339653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
I0927 17:53:33.845364  339653 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:33.845529  339653 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:33.846054  339653 cli_runner.go:164] Run: docker container inspect functional-427306 --format={{.State.Status}}
I0927 17:53:33.870084  339653 ssh_runner.go:195] Run: systemctl --version
I0927 17:53:33.870155  339653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-427306
I0927 17:53:33.891258  339653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/functional-427306/id_rsa Username:docker}
I0927 17:53:33.983384  339653 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-427306 ssh pgrep buildkitd: exit status 1 (267.757651ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image build -t localhost/my-image:functional-427306 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 image build -t localhost/my-image:functional-427306 testdata/build --alsologtostderr: (3.210184749s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-427306 image build -t localhost/my-image:functional-427306 testdata/build --alsologtostderr:
I0927 17:53:34.675388  339839 out.go:345] Setting OutFile to fd 1 ...
I0927 17:53:34.676028  339839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.676069  339839 out.go:358] Setting ErrFile to fd 2...
I0927 17:53:34.676091  339839 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:53:34.676398  339839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
I0927 17:53:34.677182  339839 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.678312  339839 config.go:182] Loaded profile config "functional-427306": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0927 17:53:34.678860  339839 cli_runner.go:164] Run: docker container inspect functional-427306 --format={{.State.Status}}
I0927 17:53:34.697159  339839 ssh_runner.go:195] Run: systemctl --version
I0927 17:53:34.697331  339839 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-427306
I0927 17:53:34.714225  339839 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/functional-427306/id_rsa Username:docker}
I0927 17:53:34.809870  339839 build_images.go:161] Building image from path: /tmp/build.2438395984.tar
I0927 17:53:34.809944  339839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 17:53:34.819249  339839 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2438395984.tar
I0927 17:53:34.823202  339839 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2438395984.tar: stat -c "%s %y" /var/lib/minikube/build/build.2438395984.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2438395984.tar': No such file or directory
I0927 17:53:34.823238  339839 ssh_runner.go:362] scp /tmp/build.2438395984.tar --> /var/lib/minikube/build/build.2438395984.tar (3072 bytes)
I0927 17:53:34.850377  339839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2438395984
I0927 17:53:34.859657  339839 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2438395984 -xf /var/lib/minikube/build/build.2438395984.tar
I0927 17:53:34.869282  339839 containerd.go:394] Building image: /var/lib/minikube/build/build.2438395984
I0927 17:53:34.869396  339839 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2438395984 --local dockerfile=/var/lib/minikube/build/build.2438395984 --output type=image,name=localhost/my-image:functional-427306
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:2ba441360f4bd14964b4a02e4c25030f2c3a7390ad20151d71bd84c1f3108939
#8 exporting manifest sha256:2ba441360f4bd14964b4a02e4c25030f2c3a7390ad20151d71bd84c1f3108939 0.0s done
#8 exporting config sha256:6a4b1a0bf7713a8bfd842d68dc2618c5fe6195fc23360cc16192f4619fa55601 0.0s done
#8 naming to localhost/my-image:functional-427306 done
#8 DONE 0.1s
I0927 17:53:37.804944  339839 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2438395984 --local dockerfile=/var/lib/minikube/build/build.2438395984 --output type=image,name=localhost/my-image:functional-427306: (2.935516258s)
I0927 17:53:37.805023  339839 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2438395984
I0927 17:53:37.814333  339839 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2438395984.tar
I0927 17:53:37.823528  339839 build_images.go:217] Built localhost/my-image:functional-427306 from /tmp/build.2438395984.tar
I0927 17:53:37.823601  339839 build_images.go:133] succeeded building to: functional-427306
I0927 17:53:37.823611  339839 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)
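
Note: the buildkit steps logged above (a 97B Dockerfile, FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) are consistent with a three-line Dockerfile like the sketch below; the actual contents of testdata/build may differ:

cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo hello > content.txt
out/minikube-linux-arm64 -p functional-427306 image build -t localhost/my-image:functional-427306 .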

TestFunctional/parallel/ImageCommands/Setup (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-427306
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.86s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 update-context --alsologtostderr -v=2
E0927 17:53:30.679998  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr: (1.150152368s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
E0927 17:53:28.755943  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr: (1.073955181s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-427306
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-427306 image load --daemon kicbase/echo-server:functional-427306 --alsologtostderr: (1.002850718s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image save kicbase/echo-server:functional-427306 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image rm kicbase/echo-server:functional-427306 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image ls
E0927 17:53:33.241326  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-427306
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-427306 image save --daemon kicbase/echo-server:functional-427306 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-427306
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
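
Note: ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon together exercise a full round trip between the node's containerd store, a tarball, and the host Docker daemon. A condensed sketch (the tarball path is a placeholder):

out/minikube-linux-arm64 -p functional-427306 image save kicbase/echo-server:functional-427306 /tmp/echo-server.tar
out/minikube-linux-arm64 -p functional-427306 image load /tmp/echo-server.tar
docker rmi kicbase/echo-server:functional-427306
out/minikube-linux-arm64 -p functional-427306 image save --daemon kicbase/echo-server:functional-427306
docker image inspect kicbase/echo-server:functional-427306   # confirms the image is back in the host daemon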

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-427306
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-427306
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-427306
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (121.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950679 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 17:53:48.604554  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:09.086512  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:54:50.048034  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-950679 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m1.006780818s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.97s)
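
Note: --ha starts the cluster with multiple control-plane nodes; in this profile they appear below as ha-950679, ha-950679-m02, and ha-950679-m03, with a worker (m04) added later. A quick hand check after such a start:

kubectl --context ha-950679 get nodes -o wide
out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr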

TestMultiControlPlane/serial/DeployApp (31.57s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-950679 -- rollout status deployment/busybox: (28.727618475s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0927 17:56:11.969942  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-6ndzq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-hw5sw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-z5c95 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-6ndzq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-hw5sw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-z5c95 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-6ndzq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-hw5sw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-z5c95 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.57s)
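
Note: the rollout above waits on a three-replica busybox deployment whose pods then resolve in-cluster and external names. An illustrative manifest of that shape follows; it is not the actual testdata/ha/ha-pod-dns-test.yaml (which likely also spreads replicas across nodes), and the image tag is a placeholder:

kubectl --context ha-950679 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28
        command: ["sleep", "3600"]
EOF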

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-6ndzq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-6ndzq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-hw5sw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-hw5sw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-z5c95 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-950679 -- exec busybox-7dff88458-z5c95 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
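
Note: the awk/cut pipeline above depends on busybox nslookup printing the answer on line 5. Illustrative output (layout assumed from busybox 1.28's nslookup; it varies between versions):

nslookup host.minikube.internal
# Server:    10.96.0.10
# Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
#
# Name:      host.minikube.internal
# Address 1: 192.168.49.1 host.minikube.internal

awk 'NR==5' keeps the final Address line, and cut -d' ' -f3 extracts its third space-separated field: 192.168.49.1, the gateway the pods then ping.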

TestMultiControlPlane/serial/AddWorkerNode (22.71s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-950679 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-950679 -v=7 --alsologtostderr: (21.667127232s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr: (1.042718847s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.71s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-950679 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.013218696s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)
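
Note: the profile list JSON can be post-processed as well; a sketch (the .valid[] grouping and .Name field are assumptions about minikube's profile JSON layout):

out/minikube-linux-arm64 profile list --output json | jq -r '.valid[].Name'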

TestMultiControlPlane/serial/CopyFile (19.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 status --output json -v=7 --alsologtostderr: (1.024187327s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp testdata/cp-test.txt ha-950679:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2368562469/001/cp-test_ha-950679.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679:/home/docker/cp-test.txt ha-950679-m02:/home/docker/cp-test_ha-950679_ha-950679-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test_ha-950679_ha-950679-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679:/home/docker/cp-test.txt ha-950679-m03:/home/docker/cp-test_ha-950679_ha-950679-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test_ha-950679_ha-950679-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679:/home/docker/cp-test.txt ha-950679-m04:/home/docker/cp-test_ha-950679_ha-950679-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test_ha-950679_ha-950679-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp testdata/cp-test.txt ha-950679-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2368562469/001/cp-test_ha-950679-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m02:/home/docker/cp-test.txt ha-950679:/home/docker/cp-test_ha-950679-m02_ha-950679.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test_ha-950679-m02_ha-950679.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m02:/home/docker/cp-test.txt ha-950679-m03:/home/docker/cp-test_ha-950679-m02_ha-950679-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test_ha-950679-m02_ha-950679-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m02:/home/docker/cp-test.txt ha-950679-m04:/home/docker/cp-test_ha-950679-m02_ha-950679-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test_ha-950679-m02_ha-950679-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp testdata/cp-test.txt ha-950679-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2368562469/001/cp-test_ha-950679-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m03:/home/docker/cp-test.txt ha-950679:/home/docker/cp-test_ha-950679-m03_ha-950679.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test_ha-950679-m03_ha-950679.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m03:/home/docker/cp-test.txt ha-950679-m02:/home/docker/cp-test_ha-950679-m03_ha-950679-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test_ha-950679-m03_ha-950679-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m03:/home/docker/cp-test.txt ha-950679-m04:/home/docker/cp-test_ha-950679-m03_ha-950679-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test_ha-950679-m03_ha-950679-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp testdata/cp-test.txt ha-950679-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2368562469/001/cp-test_ha-950679-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m04:/home/docker/cp-test.txt ha-950679:/home/docker/cp-test_ha-950679-m04_ha-950679.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679 "sudo cat /home/docker/cp-test_ha-950679-m04_ha-950679.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m04:/home/docker/cp-test.txt ha-950679-m02:/home/docker/cp-test_ha-950679-m04_ha-950679-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m02 "sudo cat /home/docker/cp-test_ha-950679-m04_ha-950679-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 cp ha-950679-m04:/home/docker/cp-test.txt ha-950679-m03:/home/docker/cp-test_ha-950679-m04_ha-950679-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 ssh -n ha-950679-m03 "sudo cat /home/docker/cp-test_ha-950679-m04_ha-950679-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.09s)
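
The CopyFile sequence above is a round trip: every "cp" into a node is immediately verified by reading the file back over "ssh" and comparing. A minimal Go sketch of that pattern, reusing the profile and node names from the log (the helper is illustrative, not minikube's own test code):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run invokes the minikube binary under test and returns its stdout.
func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if err != nil {
		log.Fatalf("%v: %v", args, err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Push the file into a node, then cat it back through SSH.
	run("-p", "ha-950679", "cp", "testdata/cp-test.txt", "ha-950679-m02:/home/docker/cp-test.txt")
	got := run("-p", "ha-950679", "ssh", "-n", "ha-950679-m02", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("cp round trip mismatch")
	}
	fmt.Println("cp round trip ok")
}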

TestMultiControlPlane/serial/StopSecondaryNode (13.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 node stop m02 -v=7 --alsologtostderr: (12.356310015s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr: exit status 7 (751.303577ms)
-- stdout --
	ha-950679
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-950679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950679-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-950679-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0927 17:57:11.460007  356007 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:57:11.460152  356007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:57:11.460164  356007 out.go:358] Setting ErrFile to fd 2...
	I0927 17:57:11.460169  356007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:57:11.460445  356007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 17:57:11.460629  356007 out.go:352] Setting JSON to false
	I0927 17:57:11.460659  356007 mustload.go:65] Loading cluster: ha-950679
	I0927 17:57:11.460763  356007 notify.go:220] Checking for updates...
	I0927 17:57:11.461070  356007 config.go:182] Loaded profile config "ha-950679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 17:57:11.461083  356007 status.go:174] checking status of ha-950679 ...
	I0927 17:57:11.461978  356007 cli_runner.go:164] Run: docker container inspect ha-950679 --format={{.State.Status}}
	I0927 17:57:11.480433  356007 status.go:364] ha-950679 host status = "Running" (err=<nil>)
	I0927 17:57:11.480461  356007 host.go:66] Checking if "ha-950679" exists ...
	I0927 17:57:11.480775  356007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950679
	I0927 17:57:11.502788  356007 host.go:66] Checking if "ha-950679" exists ...
	I0927 17:57:11.503121  356007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:57:11.503211  356007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950679
	I0927 17:57:11.523510  356007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/ha-950679/id_rsa Username:docker}
	I0927 17:57:11.619169  356007 ssh_runner.go:195] Run: systemctl --version
	I0927 17:57:11.623952  356007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:57:11.637718  356007 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 17:57:11.697415  356007 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-27 17:57:11.68225543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 17:57:11.698019  356007 kubeconfig.go:125] found "ha-950679" server: "https://192.168.49.254:8443"
	I0927 17:57:11.698046  356007 api_server.go:166] Checking apiserver status ...
	I0927 17:57:11.698090  356007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:57:11.710893  356007 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	I0927 17:57:11.721067  356007 api_server.go:182] apiserver freezer: "6:freezer:/docker/b55265e6de6806d89144708420b9ac1d6b173d62b5f6d29b877967540a3eaf2b/kubepods/burstable/pod199177893a743eb9b800a9ae593cb830/e81070a32e47701001dee8e08bb12efbf8dae00ef5281f16d2d72097ddba669f"
	I0927 17:57:11.721153  356007 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b55265e6de6806d89144708420b9ac1d6b173d62b5f6d29b877967540a3eaf2b/kubepods/burstable/pod199177893a743eb9b800a9ae593cb830/e81070a32e47701001dee8e08bb12efbf8dae00ef5281f16d2d72097ddba669f/freezer.state
	I0927 17:57:11.731121  356007 api_server.go:204] freezer state: "THAWED"
	I0927 17:57:11.731153  356007 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 17:57:11.740257  356007 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 17:57:11.740288  356007 status.go:456] ha-950679 apiserver status = Running (err=<nil>)
	I0927 17:57:11.740300  356007 status.go:176] ha-950679 status: &{Name:ha-950679 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:57:11.740332  356007 status.go:174] checking status of ha-950679-m02 ...
	I0927 17:57:11.740677  356007 cli_runner.go:164] Run: docker container inspect ha-950679-m02 --format={{.State.Status}}
	I0927 17:57:11.758220  356007 status.go:364] ha-950679-m02 host status = "Stopped" (err=<nil>)
	I0927 17:57:11.758247  356007 status.go:377] host is not running, skipping remaining checks
	I0927 17:57:11.758254  356007 status.go:176] ha-950679-m02 status: &{Name:ha-950679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:57:11.758275  356007 status.go:174] checking status of ha-950679-m03 ...
	I0927 17:57:11.758625  356007 cli_runner.go:164] Run: docker container inspect ha-950679-m03 --format={{.State.Status}}
	I0927 17:57:11.779232  356007 status.go:364] ha-950679-m03 host status = "Running" (err=<nil>)
	I0927 17:57:11.779256  356007 host.go:66] Checking if "ha-950679-m03" exists ...
	I0927 17:57:11.779564  356007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950679-m03
	I0927 17:57:11.807565  356007 host.go:66] Checking if "ha-950679-m03" exists ...
	I0927 17:57:11.808046  356007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:57:11.808101  356007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950679-m03
	I0927 17:57:11.826595  356007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/ha-950679-m03/id_rsa Username:docker}
	I0927 17:57:11.918479  356007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:57:11.930610  356007 kubeconfig.go:125] found "ha-950679" server: "https://192.168.49.254:8443"
	I0927 17:57:11.930641  356007 api_server.go:166] Checking apiserver status ...
	I0927 17:57:11.930681  356007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:57:11.942825  356007 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup
	I0927 17:57:11.952521  356007 api_server.go:182] apiserver freezer: "6:freezer:/docker/84d05aa9d9f586804c6eee260156107c5239f13ba6c0a3b98129326f399e5486/kubepods/burstable/podc0f6179cb481afd0dfc67cb506e263fa/c5bfb875042f197b0c2a432bcab51051e95276dfa65775333222673bf557840a"
	I0927 17:57:11.952617  356007 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/84d05aa9d9f586804c6eee260156107c5239f13ba6c0a3b98129326f399e5486/kubepods/burstable/podc0f6179cb481afd0dfc67cb506e263fa/c5bfb875042f197b0c2a432bcab51051e95276dfa65775333222673bf557840a/freezer.state
	I0927 17:57:11.961795  356007 api_server.go:204] freezer state: "THAWED"
	I0927 17:57:11.961827  356007 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 17:57:11.969856  356007 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 17:57:11.969896  356007 status.go:456] ha-950679-m03 apiserver status = Running (err=<nil>)
	I0927 17:57:11.969924  356007 status.go:176] ha-950679-m03 status: &{Name:ha-950679-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 17:57:11.969950  356007 status.go:174] checking status of ha-950679-m04 ...
	I0927 17:57:11.970265  356007 cli_runner.go:164] Run: docker container inspect ha-950679-m04 --format={{.State.Status}}
	I0927 17:57:11.989804  356007 status.go:364] ha-950679-m04 host status = "Running" (err=<nil>)
	I0927 17:57:11.989831  356007 host.go:66] Checking if "ha-950679-m04" exists ...
	I0927 17:57:11.990152  356007 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-950679-m04
	I0927 17:57:12.011970  356007 host.go:66] Checking if "ha-950679-m04" exists ...
	I0927 17:57:12.012309  356007 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 17:57:12.012360  356007 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-950679-m04
	I0927 17:57:12.045119  356007 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/ha-950679-m04/id_rsa Username:docker}
	I0927 17:57:12.143708  356007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:57:12.156864  356007 status.go:176] ha-950679-m04 status: &{Name:ha-950679-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.11s)
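
Note how the degraded state surfaces above: "status" prints the per-node report but exits with code 7 once any node is stopped, so a caller can detect degradation from the exit code alone. A minimal sketch (not the test's own helper) of that check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-950679", "status")
	out, err := cmd.Output() // out still holds the report on non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("cluster degraded, status exit code %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the command failed to start at all
	}
	fmt.Printf("cluster healthy\n%s", out)
}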

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.2s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 node start m02 -v=7 --alsologtostderr: (16.999359023s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr: (1.08039174s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (113.14s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-950679 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-950679 -v=7 --alsologtostderr
E0927 17:57:46.668504  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.674829  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.686292  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.707721  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.749218  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.830740  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:46.995274  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:47.317116  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:47.958697  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:49.240145  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:51.802449  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:57:56.923947  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:58:07.166172  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-950679 -v=7 --alsologtostderr: (37.381251766s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950679 --wait=true -v=7 --alsologtostderr
E0927 17:58:27.647558  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:58:28.104420  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:58:55.811528  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:59:08.609817  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-950679 --wait=true -v=7 --alsologtostderr: (1m15.610311794s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-950679
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (113.14s)
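
The invariant this test exercises is simple: the node list captured before "stop" must match the one reported after "start --wait=true". A Go sketch of that comparison, under the assumption that "node list" output is stable across a clean restart (the stop/start calls themselves are elided, as in the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// nodeList returns the raw "minikube node list" output for the profile.
func nodeList() string {
	out, err := exec.Command("out/minikube-linux-arm64", "node", "list", "-p", "ha-950679").Output()
	if err != nil {
		log.Fatal(err)
	}
	return string(out)
}

func main() {
	before := nodeList()
	// ... stop and restart the cluster here, as the test does ...
	after := nodeList()
	if before != after {
		log.Fatalf("restart dropped nodes:\nbefore: %s\nafter: %s", before, after)
	}
	fmt.Println("restart kept all nodes")
}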

TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 node delete m03 -v=7 --alsologtostderr: (9.63712004s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (37.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 stop -v=7 --alsologtostderr: (37.075076943s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr: exit status 7 (100.538686ms)
-- stdout --
	ha-950679
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950679-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-950679-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0927 18:00:13.653982  370362 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:00:13.654427  370362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:00:13.654442  370362 out.go:358] Setting ErrFile to fd 2...
	I0927 18:00:13.654448  370362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:00:13.654710  370362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:00:13.654906  370362 out.go:352] Setting JSON to false
	I0927 18:00:13.654936  370362 mustload.go:65] Loading cluster: ha-950679
	I0927 18:00:13.655377  370362 config.go:182] Loaded profile config "ha-950679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 18:00:13.655400  370362 status.go:174] checking status of ha-950679 ...
	I0927 18:00:13.655957  370362 cli_runner.go:164] Run: docker container inspect ha-950679 --format={{.State.Status}}
	I0927 18:00:13.656227  370362 notify.go:220] Checking for updates...
	I0927 18:00:13.673056  370362 status.go:364] ha-950679 host status = "Stopped" (err=<nil>)
	I0927 18:00:13.673082  370362 status.go:377] host is not running, skipping remaining checks
	I0927 18:00:13.673088  370362 status.go:176] ha-950679 status: &{Name:ha-950679 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:00:13.673113  370362 status.go:174] checking status of ha-950679-m02 ...
	I0927 18:00:13.673454  370362 cli_runner.go:164] Run: docker container inspect ha-950679-m02 --format={{.State.Status}}
	I0927 18:00:13.689026  370362 status.go:364] ha-950679-m02 host status = "Stopped" (err=<nil>)
	I0927 18:00:13.689047  370362 status.go:377] host is not running, skipping remaining checks
	I0927 18:00:13.689053  370362 status.go:176] ha-950679-m02 status: &{Name:ha-950679-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:00:13.689074  370362 status.go:174] checking status of ha-950679-m04 ...
	I0927 18:00:13.689479  370362 cli_runner.go:164] Run: docker container inspect ha-950679-m04 --format={{.State.Status}}
	I0927 18:00:13.708454  370362 status.go:364] ha-950679-m04 host status = "Stopped" (err=<nil>)
	I0927 18:00:13.708474  370362 status.go:377] host is not running, skipping remaining checks
	I0927 18:00:13.708481  370362 status.go:176] ha-950679-m04 status: &{Name:ha-950679-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.18s)

TestMultiControlPlane/serial/RestartCluster (67.3s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-950679 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 18:00:30.531078  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-950679 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.335789816s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.30s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (42.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-950679 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-950679 --control-plane -v=7 --alsologtostderr: (41.14268376s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-950679 status -v=7 --alsologtostderr: (1.043855608s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (91.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-306277 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0927 18:02:46.668288  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:03:14.373416  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:03:28.104321  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-306277 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m31.714897324s)
--- PASS: TestJSONOutput/start/Command (91.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
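
The DistinctCurrentSteps and IncreasingCurrentSteps subtests audit the event stream that "--output=json" emits: every step event carries a "currentstep" counter, and the counters must never repeat and never go backwards. A sketch of that invariant check (a strictly increasing counter satisfies both); the "type" and "data.currentstep" field names match the JSON events shown later in this report, but the scanner itself is illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	last := -1
	sc := bufio.NewScanner(os.Stdin) // pipe "minikube start --output=json" output here
	for sc.Scan() {
		var ev stepEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // not a step event
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil || n <= last {
			fmt.Printf("bad step ordering: %q after %d\n", ev.Data.CurrentStep, last)
			os.Exit(1)
		}
		last = n
	}
	fmt.Println("steps distinct and increasing")
}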

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-306277 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-306277 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-306277 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-306277 --output=json --user=testUser: (5.831820643s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-696020 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-696020 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.308151ms)
-- stdout --
	{"specversion":"1.0","id":"9bebdfd9-592e-4e91-94cf-1b386ba113a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-696020] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb186159-3a28-43de-839a-a94c77ee4cfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"96c9a5e4-8c48-4e33-99d9-c1ceb4ff7f1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc685843-a11a-4a5f-9561-de1d77114d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig"}}
	{"specversion":"1.0","id":"1d1ef429-7cc6-4156-a262-1d9be4c4ef57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube"}}
	{"specversion":"1.0","id":"919e68e2-bcbd-463b-9fe2-a57492d30101","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fbe856f0-54ad-4100-8cea-08d4617c810a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd854bf4-14cd-41b9-a738-2e4de1b3cd1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-696020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-696020
--- PASS: TestErrorJSONOutput (0.23s)
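
The failure path above ends with a single "io.k8s.sigs.minikube.error" event, and a consumer can recover the advised exit code, error name, and message from it. A sketch using the field names visible in the event above (the embedded line is copied from the log; all data values are strings there, so a string map suffices):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	line := []byte(`{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS","message":"The driver 'fail' is not supported on linux/arm64"}}`)
	var ev struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	if err := json.Unmarshal(line, &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}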

TestKicCustomNetwork/create_custom_network (42.61s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-103149 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-103149 --network=: (40.551648882s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-103149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-103149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-103149: (2.031497034s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.61s)

TestKicCustomNetwork/use_default_bridge_network (31.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-489296 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-489296 --network=bridge: (29.851608759s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-489296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-489296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-489296: (1.919389282s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.79s)

TestKicExistingNetwork (32.47s)

=== RUN   TestKicExistingNetwork
I0927 18:05:10.686593  299395 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 18:05:10.702435  299395 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 18:05:10.702520  299395 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 18:05:10.702539  299395 cli_runner.go:164] Run: docker network inspect existing-network
W0927 18:05:10.717634  299395 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 18:05:10.717665  299395 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0927 18:05:10.717679  299395 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0927 18:05:10.717780  299395 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 18:05:10.733800  299395 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-adf1e8729b5f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:0f:46:76:c6} reservation:<nil>}
I0927 18:05:10.734157  299395 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000413a40}
I0927 18:05:10.734185  299395 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 18:05:10.734247  299395 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 18:05:10.802914  299395 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-939115 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-939115 --network=existing-network: (30.408843619s)
helpers_test.go:175: Cleaning up "existing-network-939115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-939115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-939115: (1.913030045s)
I0927 18:05:43.139876  299395 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.47s)
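
The network_create.go lines above show the subnet-probing logic: inspect the subnets already claimed by docker networks, skip the taken 192.168.49.0/24, and create the network on the next free candidate (192.168.58.0/24 here). A Go sketch of that probe; the step between candidates mirrors the 49-to-58 jump in the log but is otherwise an assumption, and the docker CLI calls are the same ones the log records:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		panic(err)
	}
	// Collect every subnet currently claimed by a docker network.
	taken := map[string]bool{}
	for _, name := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err == nil {
			taken[strings.TrimSpace(string(sub))] = true
		}
	}
	// Walk candidate private /24s until one is free.
	for third := 49; third < 255; third += 9 {
		candidate := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[candidate] {
			fmt.Println("free subnet:", candidate)
			return
		}
	}
}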

TestKicCustomSubnet (34.76s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-579926 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-579926 --subnet=192.168.60.0/24: (32.678746994s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-579926 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-579926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-579926
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-579926: (2.060063668s)
--- PASS: TestKicCustomSubnet (34.76s)

TestKicStaticIP (34.3s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-320675 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-320675 --static-ip=192.168.200.200: (32.03805089s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-320675 ip
helpers_test.go:175: Cleaning up "static-ip-320675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-320675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-320675: (2.081350165s)
--- PASS: TestKicStaticIP (34.30s)
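
The static-IP assertion reduces to asking minikube for the node IP and comparing it with the --static-ip that was requested. A minimal sketch, using the profile name and address from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-320675", "ip").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.200.200" {
		panic(fmt.Sprintf("wanted 192.168.200.200, got %s", got))
	}
	fmt.Println("static IP honored:", got)
}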

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (66.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-112564 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-112564 --driver=docker  --container-runtime=containerd: (29.887998999s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-115263 --driver=docker  --container-runtime=containerd
E0927 18:07:46.669739  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-115263 --driver=docker  --container-runtime=containerd: (31.292801545s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-112564
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-115263
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-115263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-115263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-115263: (2.018849703s)
helpers_test.go:175: Cleaning up "first-112564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-112564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-112564: (1.911550491s)
--- PASS: TestMinikubeProfile (66.45s)
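
A sketch of consuming "profile list -ojson" as this test does. Only the two profile names come from the log; the top-level "valid" key and per-profile "Name" field are assumptions about minikube's JSON profile listing, so treat the struct shape as illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var profiles struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("profile:", p.Name) // expect first-112564 and second-115263 here
	}
}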

TestMountStart/serial/StartWithMountFirst (6.58s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-227205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-227205 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.577614123s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.58s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-227205 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (9.05s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-228961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-228961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.048587649s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.05s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.64s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-227205 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-227205 --alsologtostderr -v=5: (1.642021296s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-228961
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-228961: (1.217910455s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.31s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-228961
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-228961: (6.301593169s)
--- PASS: TestMountStart/serial/RestartStopped (7.31s)

TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228961 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (102.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-751636 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 18:08:28.104109  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:09:51.173120  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-751636 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m42.006551909s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.50s)
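
A two-node cluster like the one above can be brought up with the same flags; a minimal sketch (profile name is a placeholder):

    minikube start -p demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    # status should report one Control Plane entry and one Worker entry
    minikube -p demo status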

TestMultiNode/serial/DeployApp2Nodes (19.16s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-751636 -- rollout status deployment/busybox: (17.230914902s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-qbdz7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-szjv6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-qbdz7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-szjv6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-qbdz7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-szjv6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.16s)
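
The three nslookup calls per pod check external DNS, the in-cluster service name, and its fully qualified form. A sketch of the same sweep over every busybox replica; the app=busybox label selector is an assumption about the testdata deployment, which is not shown here:

    for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.io                          # external name
      kubectl exec "$pod" -- nslookup kubernetes.default                     # cluster service
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local   # fully qualified
    done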

TestMultiNode/serial/PingHostFrom2Pods (0.98s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-qbdz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-qbdz7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-szjv6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-751636 -- exec busybox-7dff88458-szjv6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
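
The pipeline above leans on the fixed layout of busybox nslookup output: the resolved address of host.minikube.internal sits on line 5, in the third space-separated field, which awk and cut extract. A sketch of the same extraction, where $POD stands for any running busybox pod:

    HOST_IP=$(kubectl exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # the pod must be able to reach the host gateway it just resolved
    kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"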

TestMultiNode/serial/AddNode (15.94s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-751636 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-751636 -v 3 --alsologtostderr: (15.064926785s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.94s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-751636 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (9.88s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp testdata/cp-test.txt multinode-751636:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844062087/001/cp-test_multinode-751636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636:/home/docker/cp-test.txt multinode-751636-m02:/home/docker/cp-test_multinode-751636_multinode-751636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test_multinode-751636_multinode-751636-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636:/home/docker/cp-test.txt multinode-751636-m03:/home/docker/cp-test_multinode-751636_multinode-751636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test_multinode-751636_multinode-751636-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp testdata/cp-test.txt multinode-751636-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844062087/001/cp-test_multinode-751636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m02:/home/docker/cp-test.txt multinode-751636:/home/docker/cp-test_multinode-751636-m02_multinode-751636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test_multinode-751636-m02_multinode-751636.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m02:/home/docker/cp-test.txt multinode-751636-m03:/home/docker/cp-test_multinode-751636-m02_multinode-751636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test_multinode-751636-m02_multinode-751636-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp testdata/cp-test.txt multinode-751636-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1844062087/001/cp-test_multinode-751636-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m03:/home/docker/cp-test.txt multinode-751636:/home/docker/cp-test_multinode-751636-m03_multinode-751636.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636 "sudo cat /home/docker/cp-test_multinode-751636-m03_multinode-751636.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 cp multinode-751636-m03:/home/docker/cp-test.txt multinode-751636-m02:/home/docker/cp-test_multinode-751636-m03_multinode-751636-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 ssh -n multinode-751636-m02 "sudo cat /home/docker/cp-test_multinode-751636-m03_multinode-751636-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.88s)
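
The copy matrix above exercises all three forms of minikube cp, each verified by cat over ssh. A condensed sketch with placeholder names (demo, demo-m02):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt              # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                  # node -> host
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt # node -> node
    # -n targets the ssh at a specific node for verification
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"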

TestMultiNode/serial/StopNode (2.23s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-751636 node stop m03: (1.21861703s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-751636 status: exit status 7 (510.066325ms)

-- stdout --
	multinode-751636
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-751636-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-751636-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr: exit status 7 (497.662504ms)

-- stdout --
	multinode-751636
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-751636-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-751636-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 18:10:58.307079  423892 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:10:58.307297  423892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:10:58.307320  423892 out.go:358] Setting ErrFile to fd 2...
	I0927 18:10:58.307341  423892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:10:58.307603  423892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:10:58.307819  423892 out.go:352] Setting JSON to false
	I0927 18:10:58.307875  423892 mustload.go:65] Loading cluster: multinode-751636
	I0927 18:10:58.307956  423892 notify.go:220] Checking for updates...
	I0927 18:10:58.309464  423892 config.go:182] Loaded profile config "multinode-751636": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 18:10:58.309524  423892 status.go:174] checking status of multinode-751636 ...
	I0927 18:10:58.310424  423892 cli_runner.go:164] Run: docker container inspect multinode-751636 --format={{.State.Status}}
	I0927 18:10:58.329454  423892 status.go:364] multinode-751636 host status = "Running" (err=<nil>)
	I0927 18:10:58.329477  423892 host.go:66] Checking if "multinode-751636" exists ...
	I0927 18:10:58.329799  423892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-751636
	I0927 18:10:58.347131  423892 host.go:66] Checking if "multinode-751636" exists ...
	I0927 18:10:58.347450  423892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:10:58.347495  423892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-751636
	I0927 18:10:58.373870  423892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/multinode-751636/id_rsa Username:docker}
	I0927 18:10:58.466446  423892 ssh_runner.go:195] Run: systemctl --version
	I0927 18:10:58.470926  423892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:10:58.483215  423892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:10:58.532514  423892 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-27 18:10:58.521932082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:10:58.533132  423892 kubeconfig.go:125] found "multinode-751636" server: "https://192.168.67.2:8443"
	I0927 18:10:58.533163  423892 api_server.go:166] Checking apiserver status ...
	I0927 18:10:58.533204  423892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:10:58.544534  423892 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	I0927 18:10:58.554259  423892 api_server.go:182] apiserver freezer: "6:freezer:/docker/8b26d6d59fb1fff84bc400ec27c91cf8e943788acfd740c12a27137a268f91a8/kubepods/burstable/pod486ad02127f6c75ba859375bed71d85e/0f3ee71e831180be074be1cf4cae0d9b3d4053876e0d3d235f0361aae01bd912"
	I0927 18:10:58.554336  423892 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b26d6d59fb1fff84bc400ec27c91cf8e943788acfd740c12a27137a268f91a8/kubepods/burstable/pod486ad02127f6c75ba859375bed71d85e/0f3ee71e831180be074be1cf4cae0d9b3d4053876e0d3d235f0361aae01bd912/freezer.state
	I0927 18:10:58.563432  423892 api_server.go:204] freezer state: "THAWED"
	I0927 18:10:58.563463  423892 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 18:10:58.571537  423892 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 18:10:58.571571  423892 status.go:456] multinode-751636 apiserver status = Running (err=<nil>)
	I0927 18:10:58.571582  423892 status.go:176] multinode-751636 status: &{Name:multinode-751636 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:10:58.571599  423892 status.go:174] checking status of multinode-751636-m02 ...
	I0927 18:10:58.571924  423892 cli_runner.go:164] Run: docker container inspect multinode-751636-m02 --format={{.State.Status}}
	I0927 18:10:58.588813  423892 status.go:364] multinode-751636-m02 host status = "Running" (err=<nil>)
	I0927 18:10:58.588839  423892 host.go:66] Checking if "multinode-751636-m02" exists ...
	I0927 18:10:58.589143  423892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-751636-m02
	I0927 18:10:58.609964  423892 host.go:66] Checking if "multinode-751636-m02" exists ...
	I0927 18:10:58.610302  423892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:10:58.610382  423892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-751636-m02
	I0927 18:10:58.628419  423892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19712-294006/.minikube/machines/multinode-751636-m02/id_rsa Username:docker}
	I0927 18:10:58.718198  423892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:10:58.730047  423892 status.go:176] multinode-751636-m02 status: &{Name:multinode-751636-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:10:58.730084  423892 status.go:174] checking status of multinode-751636-m03 ...
	I0927 18:10:58.730401  423892 cli_runner.go:164] Run: docker container inspect multinode-751636-m03 --format={{.State.Status}}
	I0927 18:10:58.746492  423892 status.go:364] multinode-751636-m03 host status = "Stopped" (err=<nil>)
	I0927 18:10:58.746518  423892 status.go:377] host is not running, skipping remaining checks
	I0927 18:10:58.746525  423892 status.go:176] multinode-751636-m03 status: &{Name:multinode-751636-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
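
Note the exit code in the two status runs above: once any node reports Stopped, minikube status exits 7 instead of 0, so callers must treat a non-zero exit as "degraded" rather than "command failed". A sketch of branching on it (profile name is a placeholder):

    minikube -p demo status
    rc=$?
    # rc=7 here means at least one host/kubelet is Stopped, as in the output above
    [ "$rc" -ne 0 ] && echo "cluster degraded: status exited $rc"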

TestMultiNode/serial/StartAfterStop (9.39s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-751636 node start m03 -v=7 --alsologtostderr: (8.634398706s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.39s)

TestMultiNode/serial/RestartKeepsNodes (92.94s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-751636
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-751636
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-751636: (24.981356658s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-751636 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-751636 --wait=true -v=8 --alsologtostderr: (1m7.819409892s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-751636
--- PASS: TestMultiNode/serial/RestartKeepsNodes (92.94s)
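
The invariant under test is that a full stop/start cycle preserves the node set. A minimal sketch of the same check (profile name is a placeholder):

    before=$(minikube node list -p demo)
    minikube stop -p demo
    minikube start -p demo --wait=true
    after=$(minikube node list -p demo)
    # the node list should be identical across the restart
    [ "$before" = "$after" ] && echo "nodes preserved"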

TestMultiNode/serial/DeleteNode (5.69s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-751636 node delete m03: (4.981743137s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
E0927 18:12:46.667927  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)
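
The go-template query above flattens every node's condition list down to just the Ready status, one per line, so the test can assert that only True values remain after the delete. The same query, unwrapped for readability:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'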

TestMultiNode/serial/StopMultiNode (24.43s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-751636 stop: (24.228431184s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-751636 status: exit status 7 (96.063157ms)

-- stdout --
	multinode-751636
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-751636-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr: exit status 7 (104.03283ms)

-- stdout --
	multinode-751636
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-751636-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0927 18:13:11.147972  432321 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:13:11.148139  432321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:13:11.148148  432321 out.go:358] Setting ErrFile to fd 2...
	I0927 18:13:11.148154  432321 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:13:11.148405  432321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:13:11.148607  432321 out.go:352] Setting JSON to false
	I0927 18:13:11.148640  432321 mustload.go:65] Loading cluster: multinode-751636
	I0927 18:13:11.148682  432321 notify.go:220] Checking for updates...
	I0927 18:13:11.149068  432321 config.go:182] Loaded profile config "multinode-751636": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 18:13:11.149083  432321 status.go:174] checking status of multinode-751636 ...
	I0927 18:13:11.149784  432321 cli_runner.go:164] Run: docker container inspect multinode-751636 --format={{.State.Status}}
	I0927 18:13:11.168671  432321 status.go:364] multinode-751636 host status = "Stopped" (err=<nil>)
	I0927 18:13:11.168697  432321 status.go:377] host is not running, skipping remaining checks
	I0927 18:13:11.168704  432321 status.go:176] multinode-751636 status: &{Name:multinode-751636 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:13:11.168737  432321 status.go:174] checking status of multinode-751636-m02 ...
	I0927 18:13:11.169071  432321 cli_runner.go:164] Run: docker container inspect multinode-751636-m02 --format={{.State.Status}}
	I0927 18:13:11.194904  432321 status.go:364] multinode-751636-m02 host status = "Stopped" (err=<nil>)
	I0927 18:13:11.194929  432321 status.go:377] host is not running, skipping remaining checks
	I0927 18:13:11.194936  432321 status.go:176] multinode-751636-m02 status: &{Name:multinode-751636-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.43s)

TestMultiNode/serial/RestartMultiNode (55.34s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-751636 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0927 18:13:28.103701  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-751636 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.66828668s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-751636 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.34s)

TestMultiNode/serial/ValidateNameConflict (32.2s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-751636
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-751636-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-751636-m02 --driver=docker  --container-runtime=containerd: exit status 14 (67.512712ms)

-- stdout --
	* [multinode-751636-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-751636-m02' is duplicated with machine name 'multinode-751636-m02' in profile 'multinode-751636'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-751636-m03 --driver=docker  --container-runtime=containerd
E0927 18:14:09.736129  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-751636-m03 --driver=docker  --container-runtime=containerd: (29.804848301s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-751636
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-751636: exit status 80 (302.349921ms)

-- stdout --
	* Adding node m03 to cluster multinode-751636 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-751636-m03 already exists in multinode-751636-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-751636-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-751636-m03: (1.978029702s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.20s)
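
Both refusals above are deliberate guard rails: a profile name that collides with an existing machine name fails fast with MK_USAGE (exit 14), and node add refuses a node name already claimed by a standalone profile with GUEST_NODE_ADD (exit 80). A sketch of asserting the codes, assuming the same conflicting profiles exist as in this run:

    minikube start -p multinode-751636-m02 --driver=docker --container-runtime=containerd
    echo "exit=$?"   # expect 14: profile name duplicates a machine name
    minikube node add -p multinode-751636
    echo "exit=$?"   # expect 80: next node name collides with an existing profile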

TestPreload (123.89s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-004494 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-004494 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.867668554s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-004494 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-004494 image pull gcr.io/k8s-minikube/busybox: (2.188247173s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-004494
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-004494: (12.176770677s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-004494 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-004494 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.944800335s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-004494 image list
helpers_test.go:175: Cleaning up "test-preload-004494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-004494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-004494: (2.365916769s)
--- PASS: TestPreload (123.89s)
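
The flow above validates that an image pulled into a --preload=false cluster on an older Kubernetes survives a stop and a restart under the current binary. A condensed sketch (profile name is a placeholder):

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=containerd
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --wait=true --driver=docker --container-runtime=containerd
    # the pulled busybox image must still appear after the restart
    minikube -p preload-demo image list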

TestScheduledStopUnix (106.09s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-726133 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-726133 --memory=2048 --driver=docker  --container-runtime=containerd: (29.697184164s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726133 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-726133 -n scheduled-stop-726133
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726133 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 18:17:16.851632  299395 retry.go:31] will retry after 132.388µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.852835  299395 retry.go:31] will retry after 152.448µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.853919  299395 retry.go:31] will retry after 116.199µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.854149  299395 retry.go:31] will retry after 170.716µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.855007  299395 retry.go:31] will retry after 522.901µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.856131  299395 retry.go:31] will retry after 595.729µs: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.857289  299395 retry.go:31] will retry after 1.057136ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.858470  299395 retry.go:31] will retry after 2.074648ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.861679  299395 retry.go:31] will retry after 3.728689ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.866091  299395 retry.go:31] will retry after 3.665218ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.870316  299395 retry.go:31] will retry after 3.396443ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.874535  299395 retry.go:31] will retry after 12.021508ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.886701  299395 retry.go:31] will retry after 11.032621ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.897867  299395 retry.go:31] will retry after 11.383051ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.910083  299395 retry.go:31] will retry after 17.038524ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.928057  299395 retry.go:31] will retry after 33.889327ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
I0927 18:17:16.962306  299395 retry.go:31] will retry after 69.117923ms: open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/scheduled-stop-726133/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726133 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726133 -n scheduled-stop-726133
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-726133
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-726133 --schedule 15s
E0927 18:17:46.668869  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-726133
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-726133: exit status 7 (70.732799ms)

-- stdout --
	scheduled-stop-726133
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726133 -n scheduled-stop-726133
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-726133 -n scheduled-stop-726133: exit status 7 (67.799368ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-726133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-726133
E0927 18:18:28.103591  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-726133: (4.722144405s)
--- PASS: TestScheduledStopUnix (106.09s)
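
The schedule/cancel/expire cycle above can be driven by hand; the pid-file retry noise in the log is the test polling for the scheduler process. A sketch with a placeholder profile:

    minikube stop -p sched-demo --schedule 5m                 # arm a stop 5 minutes out
    minikube status --format={{.TimeToStop}} -p sched-demo    # non-empty while armed
    minikube stop -p sched-demo --cancel-scheduled            # disarm it
    minikube stop -p sched-demo --schedule 15s && sleep 20
    minikube status -p sched-demo                             # exits 7 once the host is Stopped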

TestInsufficientStorage (10.4s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-414769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-414769 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.973062026s)

-- stdout --
	{"specversion":"1.0","id":"d3d89a78-88d0-4721-9301-015bfc9e41a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-414769] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a20459eb-7b00-4a62-95d6-fd69b34a58b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"fdcf9935-e4d6-4ddc-8cfa-7a4bf3173ee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1530c816-3a43-4ce2-bb17-74334706ee32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig"}}
	{"specversion":"1.0","id":"0cf53c5f-37b0-424a-b09d-c1d4694d316e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube"}}
	{"specversion":"1.0","id":"35d65ca4-b53f-40dc-939e-d3aef69707ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d8bc4a61-e69d-4374-a3fb-5ab348278ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c0c93f85-abe3-46d3-a672-da6c3a5b8033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"756debfa-7f8c-421c-9ffe-5c8570260ae1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"19487222-f1a8-4940-9117-6bc24e4bc176","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"59a50835-6484-4035-b097-df9e31c31468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"09fbc416-f9b0-4c12-bda2-28400fa05e5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-414769\" primary control-plane node in \"insufficient-storage-414769\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2dd6f5b-e273-4be1-8a6c-63adf5116c90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8f7985c-36c9-4cd6-bde9-eb19beebb8ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a66e1fb4-6e3d-448d-a0b1-ac74f6385962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-414769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-414769 --output=json --layout=cluster: exit status 7 (274.662584ms)

-- stdout --
	{"Name":"insufficient-storage-414769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-414769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 18:18:40.959197  450927 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-414769" does not appear in /home/jenkins/minikube-integration/19712-294006/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-414769 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-414769 --output=json --layout=cluster: exit status 7 (286.473104ms)

-- stdout --
	{"Name":"insufficient-storage-414769","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-414769","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0927 18:18:41.249531  450988 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-414769" does not appear in /home/jenkins/minikube-integration/19712-294006/kubeconfig
	E0927 18:18:41.260228  450988 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/insufficient-storage-414769/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-414769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-414769
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-414769: (1.86460133s)
--- PASS: TestInsufficientStorage (10.40s)
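
With --output=json, start emits a CloudEvents stream, so the failure is machine-readable: the final io.k8s.sigs.minikube.error event above carries name RSRC_DOCKER_STORAGE, exitcode 26, and the remediation advice. A sketch of extracting the error name with jq, reusing the MINIKUBE_TEST_* overrides the harness sets in the stdout above:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
    # prints RSRC_DOCKER_STORAGE; the start itself exits 26 when run without the pipe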

TestRunningBinaryUpgrade (80.08s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1215555626 start -p running-upgrade-502906 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1215555626 start -p running-upgrade-502906 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (36.701800668s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-502906 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-502906 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.57507213s)
helpers_test.go:175: Cleaning up "running-upgrade-502906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-502906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-502906: (2.584342229s)
--- PASS: TestRunningBinaryUpgrade (80.08s)
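
Note: the upgrade path exercised here is "start a cluster with an old release binary, then re-run start on the same profile with the new binary". A condensed sketch (binary path and profile name are hypothetical; v1.26.0 still used the deprecated --vm-driver flag):

    /tmp/minikube-v1.26.0 start -p running-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    ./out/minikube start -p running-upgrade --memory=2200 --driver=docker --container-runtime=containerd   # in-place upgrade
    ./out/minikube delete -p running-upgrade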

TestKubernetesUpgrade (355.06s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.331710766s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-778745
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-778745: (1.317700545s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-778745 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-778745 status --format={{.Host}}: exit status 7 (110.974921ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m43.020163864s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-778745 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (90.926898ms)

-- stdout --
	* [kubernetes-upgrade-778745] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-778745
	    minikube start -p kubernetes-upgrade-778745 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7787452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-778745 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-778745 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.880943282s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-778745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-778745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-778745: (2.165657598s)
--- PASS: TestKubernetesUpgrade (355.06s)
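
Note: the sequence above is upgrade (v1.20.0, stop, v1.31.1), then a downgrade attempt that is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). The only supported way down is to recreate the profile, mirroring the suggestion text (profile name hypothetical):

    minikube delete -p k8s-upgrade
    minikube start -p k8s-upgrade --kubernetes-version=v1.20.0   # recreate instead of downgrading in place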

TestMissingContainerUpgrade (188.75s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2577764175 start -p missing-upgrade-672843 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2577764175 start -p missing-upgrade-672843 --memory=2200 --driver=docker  --container-runtime=containerd: (1m43.21157586s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-672843
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-672843: (10.300206455s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-672843
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-672843 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-672843 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.858896513s)
helpers_test.go:175: Cleaning up "missing-upgrade-672843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-672843
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-672843: (2.443490408s)
--- PASS: TestMissingContainerUpgrade (188.75s)
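
Note: this test removes the cluster's Docker container behind minikube's back and verifies that start recreates it. With the docker driver the container is named after the profile, so the equivalent sketch is (profile name hypothetical):

    docker stop missing-upgrade && docker rm missing-upgrade   # simulate the lost container
    minikube start -p missing-upgrade --driver=docker --container-runtime=containerd   # rebuilds it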

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (78.392139ms)

-- stdout --
	* [NoKubernetes-130362] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
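
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, hence the MK_USAGE error (exit status 14). Following the hint in the output (profile name hypothetical):

    minikube config unset kubernetes-version   # clear a globally configured version
    minikube start -p sandbox --no-kubernetes --driver=docker --container-runtime=containerd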

TestNoKubernetes/serial/StartWithK8s (36.33s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130362 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130362 --driver=docker  --container-runtime=containerd: (35.887713047s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-130362 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.33s)

TestNoKubernetes/serial/StartWithStopK8s (19.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.009615526s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-130362 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-130362 status -o json: exit status 2 (332.826287ms)

-- stdout --
	{"Name":"NoKubernetes-130362","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-130362
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-130362: (2.087990127s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.43s)
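
Note: after a --no-kubernetes restart the node keeps running while kubelet and the apiserver stop, and status encodes that mixed state in exit code 2 (hence the expected non-zero exit above). A sketch of reading it, assuming jq (profile name hypothetical):

    minikube status -p sandbox -o json | jq -r '.Host, .Kubelet, .APIServer'   # Running / Stopped / Stopped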

TestNoKubernetes/serial/Start (11.49s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130362 --no-kubernetes --driver=docker  --container-runtime=containerd: (11.483754586s)
--- PASS: TestNoKubernetes/serial/Start (11.49s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-130362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-130362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.505577ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
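
Note: the assertion is a plain systemd probe over SSH; is-active --quiet exits non-zero when the unit is not running, so the exit status 1 above is the expected outcome (profile name hypothetical):

    minikube ssh -p sandbox "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet not active (expected with --no-kubernetes)"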

TestNoKubernetes/serial/ProfileList (0.95s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-130362
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-130362: (1.205988164s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.78s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-130362 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-130362 --driver=docker  --container-runtime=containerd: (6.781455941s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.78s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-130362 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-130362 "sudo systemctl is-active --quiet service kubelet": exit status 1 (255.950025ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (111.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.380121349 start -p stopped-upgrade-726368 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.380121349 start -p stopped-upgrade-726368 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.202767588s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.380121349 -p stopped-upgrade-726368 stop
E0927 18:22:46.667953  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.380121349 -p stopped-upgrade-726368 stop: (20.13768947s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-726368 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0927 18:23:28.104003  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-726368 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.667842716s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.01s)
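
Note: unlike TestRunningBinaryUpgrade, this flow stops the cluster with the old binary before the new binary starts it. Condensed sketch (binary path and profile name hypothetical):

    /tmp/minikube-v1.26.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0 -p stopped-upgrade stop
    ./out/minikube start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd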

TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-726368
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-726368: (1.409355741s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

TestPause/serial/Start (89.33s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-454178 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-454178 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m29.327850987s)
--- PASS: TestPause/serial/Start (89.33s)

TestNetworkPlugins/group/false (4.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-988927 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-988927 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (197.568444ms)

-- stdout --
	* [false-988927] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0927 18:26:33.739642  490126 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:26:33.739799  490126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:26:33.739818  490126 out.go:358] Setting ErrFile to fd 2...
	I0927 18:26:33.739823  490126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:26:33.740093  490126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-294006/.minikube/bin
	I0927 18:26:33.740488  490126 out.go:352] Setting JSON to false
	I0927 18:26:33.741493  490126 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7745,"bootTime":1727453849,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0927 18:26:33.741567  490126 start.go:139] virtualization:  
	I0927 18:26:33.744513  490126 out.go:177] * [false-988927] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 18:26:33.747203  490126 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:26:33.747241  490126 notify.go:220] Checking for updates...
	I0927 18:26:33.751699  490126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:26:33.753457  490126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-294006/kubeconfig
	I0927 18:26:33.755339  490126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-294006/.minikube
	I0927 18:26:33.757095  490126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 18:26:33.758933  490126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:26:33.761316  490126 config.go:182] Loaded profile config "pause-454178": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0927 18:26:33.761419  490126 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:26:33.794978  490126 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 18:26:33.795159  490126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 18:26:33.870112  490126 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 18:26:33.858333334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 18:26:33.870237  490126 docker.go:318] overlay module found
	I0927 18:26:33.872875  490126 out.go:177] * Using the docker driver based on user configuration
	I0927 18:26:33.874564  490126 start.go:297] selected driver: docker
	I0927 18:26:33.874586  490126 start.go:901] validating driver "docker" against <nil>
	I0927 18:26:33.874602  490126 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:26:33.877221  490126 out.go:201] 
	W0927 18:26:33.879024  490126 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0927 18:26:33.880932  490126 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-988927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-988927

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-988927

>>> host: /etc/nsswitch.conf:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/hosts:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/resolv.conf:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-988927

>>> host: crictl pods:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: crictl containers:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> k8s: describe netcat deployment:
error: context "false-988927" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-988927" does not exist

>>> k8s: netcat logs:
error: context "false-988927" does not exist

>>> k8s: describe coredns deployment:
error: context "false-988927" does not exist

>>> k8s: describe coredns pods:
error: context "false-988927" does not exist

>>> k8s: coredns logs:
error: context "false-988927" does not exist

>>> k8s: describe api server pod(s):
error: context "false-988927" does not exist

>>> k8s: api server logs:
error: context "false-988927" does not exist

>>> host: /etc/cni:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: ip a s:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: ip r s:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: iptables-save:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: iptables table nat:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> k8s: describe kube-proxy daemon set:
error: context "false-988927" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-988927" does not exist

>>> k8s: kube-proxy logs:
error: context "false-988927" does not exist

>>> host: kubelet daemon status:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: kubelet daemon config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> k8s: kubelet logs:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-454178
contexts:
- context:
    cluster: pause-454178
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-454178
  name: pause-454178
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-454178
  user:
    client-certificate: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.crt
    client-key: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-988927

>>> host: docker daemon status:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: docker daemon config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/docker/daemon.json:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: docker system info:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: cri-docker daemon status:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: cri-docker daemon config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: cri-dockerd version:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: containerd daemon status:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: containerd daemon config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/containerd/config.toml:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: containerd config dump:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: crio daemon status:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: crio daemon config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: /etc/crio:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

>>> host: crio config:
* Profile "false-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-988927"

----------------------- debugLogs end: false-988927 [took: 3.746926355s] --------------------------------
helpers_test.go:175: Cleaning up "false-988927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-988927
--- PASS: TestNetworkPlugins/group/false (4.14s)
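
Note: the asserted failure is that --cni=false is rejected at validation time (MK_USAGE, exit status 14) because the containerd runtime requires a CNI plugin; the debug log above simply confirms no cluster was created. Any concrete CNI value passes validation, e.g. (profile name hypothetical):

    minikube start -p sandbox --driver=docker --container-runtime=containerd --cni=bridge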

TestPause/serial/SecondStartNoReconfiguration (7.38s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-454178 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-454178 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.353998179s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.38s)

TestPause/serial/Pause (1.09s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-454178 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-454178 --alsologtostderr -v=5: (1.093556772s)
--- PASS: TestPause/serial/Pause (1.09s)

TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-454178 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-454178 --output=json --layout=cluster: exit status 2 (416.778325ms)

-- stdout --
	{"Name":"pause-454178","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-454178","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
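
Note: a paused cluster reports StatusCode 418 ("Paused") and the status command exits 2, which is why the non-zero exit above still counts as a pass. A sketch, assuming jq (profile name hypothetical):

    minikube pause -p demo
    minikube status -p demo --output=json --layout=cluster | jq -r '.StatusName'   # Paused
    minikube unpause -p demo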

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-454178 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.11s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-454178 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-454178 --alsologtostderr -v=5: (1.110077517s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

TestPause/serial/DeletePaused (3.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-454178 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-454178 --alsologtostderr -v=5: (3.212806908s)
--- PASS: TestPause/serial/DeletePaused (3.21s)

TestPause/serial/VerifyDeletedResources (0.46s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-454178
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-454178: exit status 1 (15.737178ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-454178: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
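
Note: deletion is verified against Docker directly; volume inspect exits non-zero once the volume is gone, which is the expected exit status 1 above. Sketch (profile name hypothetical):

    docker volume inspect demo >/dev/null 2>&1 || echo "volume removed"
    docker network ls --filter name=demo --format '{{.Name}}'   # empty once the network is gone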

TestStartStop/group/old-k8s-version/serial/FirstStart (172.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0927 18:28:28.103568  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-313926 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m52.225905474s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.23s)

TestStartStop/group/no-preload/serial/FirstStart (70.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-446590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 18:30:49.737686  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-446590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m10.839230748s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-313926 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9bd9f28-2ddc-4c1a-b418-7c592b93bb2e] Pending
helpers_test.go:344: "busybox" [a9bd9f28-2ddc-4c1a-b418-7c592b93bb2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9bd9f28-2ddc-4c1a-b418-7c592b93bb2e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004726333s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-313926 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-313926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-313926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.212547188s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-313926 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-313926 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-313926 --alsologtostderr -v=3: (12.366704063s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-313926 -n old-k8s-version-313926
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-313926 -n old-k8s-version-313926: exit status 7 (67.598519ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-313926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
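
Note the exit-code handling here: minikube status encodes machine state in its exit status, so querying a stopped host exits non-zero (7 in this log) even though nothing is wrong, which is why the test records "may be ok" and proceeds to enable the dashboard addon. A shell equivalent (a sketch):

    minikube status --format='{{.Host}}' -p old-k8s-version-313926 \
      || echo "status exited $? (stopped host, expected after a stop)"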

TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446590 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [543b03aa-9549-435c-b661-59dd4dcf19cc] Pending
helpers_test.go:344: "busybox" [543b03aa-9549-435c-b661-59dd4dcf19cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [543b03aa-9549-435c-b661-59dd4dcf19cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004492795s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-446590 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-446590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-446590 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025851203s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-446590 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (12.46s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-446590 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-446590 --alsologtostderr -v=3: (12.456575462s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.46s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446590 -n no-preload-446590
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446590 -n no-preload-446590: exit status 7 (72.56992ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-446590 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (272.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-446590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 18:32:46.668567  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:33:28.104622  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-446590 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m32.169995026s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-446590 -n no-preload-446590
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (272.53s)
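
The interleaved cert_rotation.go errors are background noise rather than failures of this test: client-go's certificate-rotation watcher inside the long-running test process still references client certificates of profiles exercised earlier in the run (functional-427306, addons-583947) whose files have since been removed, so each rotation attempt logs a missing-file error. Listing the profiles directory would confirm the certificates are gone (a sketch; the path is specific to this CI host):

    ls /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/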

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lk5fk" [a01f6ece-2441-4125-a1f6-d6d048161e39] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003797281s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lk5fk" [a01f6ece-2441-4125-a1f6-d6d048161e39] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004210221s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-446590 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-446590 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
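
VerifyKubernetesImages lists the images in the node's containerd store and reports anything that is not a stock minikube/Kubernetes image; the kindnet CNI image and the busybox test image are expected extras here. The JSON output can be inspected directly (a sketch, assuming jq is available and that the entries carry a repoTags field, as current minikube releases emit):

    minikube -p no-preload-446590 image list --format=json | jq -r '.[].repoTags[]'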

TestStartStop/group/no-preload/serial/Pause (3.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-446590 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446590 -n no-preload-446590
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446590 -n no-preload-446590: exit status 2 (362.230279ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446590 -n no-preload-446590
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446590 -n no-preload-446590: exit status 2 (330.727208ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-446590 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-446590 -n no-preload-446590
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-446590 -n no-preload-446590
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)
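
Pause is exercised as a full cycle: pause, confirm via status, unpause, confirm again. Both intermediate status calls exit 2 by design, since a paused apiserver ("Paused") and a halted kubelet ("Stopped") are abnormal-but-expected states at that point. The equivalent manual sequence (a sketch using a stock minikube binary):

    minikube pause -p no-preload-446590
    minikube status --format='{{.APIServer}}' -p no-preload-446590   # prints Paused, exits 2
    minikube status --format='{{.Kubelet}}' -p no-preload-446590     # prints Stopped, exits 2
    minikube unpause -p no-preload-446590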

TestStartStop/group/embed-certs/serial/FirstStart (92.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-437083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-437083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m32.090363732s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (92.09s)
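
The --embed-certs variant inlines the client certificate and key into kubeconfig as base64 data instead of pointing at files under .minikube/profiles/, which are exactly the references behind the cert_rotation noise elsewhere in this run. A quick way to see the difference in ~/.kube/config (a sketch):

    grep -c client-certificate-data ~/.kube/config   # embedded form
    grep -c 'client-certificate:' ~/.kube/config     # file-path form used by other profiles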

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jmh4f" [d9f22115-1233-4829-91de-399519b07bb0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004102774s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-jmh4f" [d9f22115-1233-4829-91de-399519b07bb0] Running
E0927 18:37:46.668217  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00404388s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-313926 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-313926 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-313926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-313926 -n old-k8s-version-313926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-313926 -n old-k8s-version-313926: exit status 2 (321.585346ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-313926 -n old-k8s-version-313926
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-313926 -n old-k8s-version-313926: exit status 2 (300.425487ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-313926 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-313926 -n old-k8s-version-313926
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-313926 -n old-k8s-version-313926
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-685871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 18:38:28.104236  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-685871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (53.8898813s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.89s)
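
The only distinguishing flag in this profile is --apiserver-port=8444, which moves the API server off minikube's default secure port of 8443; otherwise the run mirrors the other StartStop groups. Reproduction (a sketch; flags copied from the log):

    minikube start -p default-k8s-diff-port-685871 --memory=2200 --apiserver-port=8444 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.1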

TestStartStop/group/embed-certs/serial/DeployApp (10.40s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-437083 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [869fac6a-5f1e-4484-8bcb-26eb8b314719] Pending
helpers_test.go:344: "busybox" [869fac6a-5f1e-4484-8bcb-26eb8b314719] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [869fac6a-5f1e-4484-8bcb-26eb8b314719] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003672622s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-437083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-437083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-437083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073488625s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-437083 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.10s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-437083 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-437083 --alsologtostderr -v=3: (12.100969883s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-685871 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [18b2c1c1-bc53-46b9-ae93-6607aa342ac8] Pending
helpers_test.go:344: "busybox" [18b2c1c1-bc53-46b9-ae93-6607aa342ac8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [18b2c1c1-bc53-46b9-ae93-6607aa342ac8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003637954s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-685871 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-437083 -n embed-certs-437083
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-437083 -n embed-certs-437083: exit status 7 (78.910243ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-437083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (266.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-437083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-437083 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m25.758368751s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-437083 -n embed-certs-437083
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-685871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-685871 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.401337873s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-685871 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-685871 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-685871 --alsologtostderr -v=3: (12.32454615s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871: exit status 7 (72.394074ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-685871 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-685871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 18:40:54.059539  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.066179  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.077677  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.099069  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.140680  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.222203  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.384234  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:54.705985  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:55.347511  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:56.629660  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:40:59.191510  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:04.313704  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:14.555678  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:35.037582  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:48.944973  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:48.951710  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:48.963172  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:48.984645  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:49.026009  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:49.107442  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:49.268945  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:49.590665  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:50.232329  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:51.514363  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:54.076165  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:41:59.198251  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:42:09.440023  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:42:15.999518  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:42:29.921906  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:42:46.668560  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:43:10.884154  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:43:11.176544  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-685871 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m36.172237924s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9vv7l" [0cf5d99a-76bb-443a-be79-f58d03c6fc09] Running
E0927 18:43:28.103629  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003809732s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9vv7l" [0cf5d99a-76bb-443a-be79-f58d03c6fc09] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004381271s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-437083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-437083 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-437083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-437083 -n embed-certs-437083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-437083 -n embed-certs-437083: exit status 2 (357.722838ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-437083 -n embed-certs-437083
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-437083 -n embed-certs-437083: exit status 2 (398.442823ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-437083 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-437083 --alsologtostderr -v=1: (1.040425938s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-437083 -n embed-certs-437083
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-437083 -n embed-certs-437083
E0927 18:43:37.925033  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.72s)

TestStartStop/group/newest-cni/serial/FirstStart (39.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-311695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-311695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (39.044628004s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.04s)
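
The newest-cni profile deliberately starts with a reduced readiness gate (--wait=apiserver,system_pods,default_sa) because no workloads can schedule yet: --network-plugin=cni plus the kubeadm pod-network-cidr override hands networking to a CNI plugin that this test never installs. That is also why DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop below pass in 0.00s with a "cni mode requires additional setup" warning. Reproduction (a sketch; flags copied from the log):

    minikube start -p newest-cni-311695 --memory=2200 \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.1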

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-65pp5" [c1226529-11a1-4d88-aec9-ad319b15dd9c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006719528s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-65pp5" [c1226529-11a1-4d88-aec9-ad319b15dd9c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003486636s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-685871 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-685871 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-685871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-685871 --alsologtostderr -v=1: (1.283722526s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871: exit status 2 (462.532332ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871: exit status 2 (490.741858ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-685871 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-685871 --alsologtostderr -v=1: (1.140171582s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685871 -n default-k8s-diff-port-685871
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.66s)

TestNetworkPlugins/group/auto/Start (97.31s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m37.309827319s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-311695 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-311695 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-311695 --alsologtostderr -v=3: (1.269492328s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-311695 -n newest-cni-311695
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-311695 -n newest-cni-311695: exit status 7 (83.301471ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-311695 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (24.10s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-311695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0927 18:44:32.805906  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-311695 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (23.692522456s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-311695 -n newest-cni-311695
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.10s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-311695 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-311695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-311695 -n newest-cni-311695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-311695 -n newest-cni-311695: exit status 2 (347.064052ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-311695 -n newest-cni-311695
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-311695 -n newest-cni-311695: exit status 2 (392.168675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-311695 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-311695 --alsologtostderr -v=1: (1.031188832s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-311695 -n newest-cni-311695
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-311695 -n newest-cni-311695
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.81s)
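
The pause/unpause cycle above can be reproduced by hand. A minimal sketch, assuming a placeholder profile name my-profile (the exit status 2 from status on a paused cluster is expected, as the test itself notes with "may be ok"):

# Pause the cluster, inspect component state, then resume.
minikube pause -p my-profile
minikube status --format={{.APIServer}} -p my-profile   # prints "Paused", exits 2
minikube status --format={{.Kubelet}} -p my-profile     # prints "Stopped", exits 2
minikube unpause -p my-profile
minikube status --format={{.APIServer}} -p my-profile   # "Running" again once resumed
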
E0927 18:50:11.639800  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (51.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.592251648s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6bsw8" [14ee6c48-b0e8-41f4-8b67-e67eed4090ae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00449662s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
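
The ControllerPod check waits for the CNI agent pod by label. A roughly equivalent manual check with kubectl, as a sketch (label, namespace, and context taken from the log above):

# Wait for the kindnet DaemonSet pod to become Ready.
kubectl --context kindnet-988927 -n kube-system wait \
  --for=condition=Ready pod -l app=kindnet --timeout=10m
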

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-988927 "pgrep -a kubelet"
I0927 18:45:48.777772  299395 config.go:182] Loaded profile config "auto-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
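
KubeletFlags only lists the running kubelet command line over SSH. To pull one flag out of that output, a sketch (the grep pattern is illustrative, not part of the test; the exact flags present depend on the runtime):

# List the kubelet process, then extract the containerd runtime endpoint.
minikube ssh -p auto-988927 "pgrep -a kubelet" \
  | grep -o -- '--container-runtime-endpoint=[^ ]*'
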

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2z2tp" [101a9bae-7596-4844-89d5-9f6d497ad0d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2z2tp" [101a9bae-7596-4844-89d5-9f6d497ad0d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.087402704s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)
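
The NetCatPod step force-replaces a small netcat deployment and then polls its pods until Ready. A kubectl-only approximation of that wait, as a sketch (the deployment name comes from testdata/netcat-deployment.yaml shown above):

# Block until the netcat deployment reports Available.
kubectl --context auto-988927 wait --for=condition=Available \
  deployment/netcat --timeout=15m
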

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-988927 "pgrep -a kubelet"
I0927 18:45:51.472720  299395 config.go:182] Loaded profile config "kindnet-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wx4q7" [14edb624-7b07-495f-8e0c-ccb2c0853cca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 18:45:54.059171  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/old-k8s-version-313926/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wx4q7" [14edb624-7b07-495f-8e0c-ccb2c0853cca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.012958325s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

TestNetworkPlugins/group/auto/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.39s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
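
HairPin differs from Localhost in one detail: the pod dials its own Service name ("netcat") instead of localhost, which only succeeds when hairpin NAT works in the CNI. Stripped to the bare connectivity check, a sketch:

# -z only tests that the TCP connect back through the Service succeeds.
kubectl --context auto-988927 exec deployment/netcat -- \
  /bin/sh -c "nc -w 5 -z netcat 8080"
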

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (76.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m16.8930201s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.89s)

TestNetworkPlugins/group/custom-flannel/Start (63.06s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0927 18:46:48.945491  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:47:16.647175  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/no-preload-446590/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.057929421s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.06s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-988927 "pgrep -a kubelet"
E0927 18:47:29.739576  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
I0927 18:47:29.806581  299395 config.go:182] Loaded profile config "custom-flannel-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ktfgt" [ba98122f-e105-45b4-a0d8-348b44173b26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ktfgt" [ba98122f-e105-45b4-a0d8-348b44173b26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00460296s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2754t" [db5c9f54-bc63-4ad7-9d2b-6c3e35c87e7f] Running
E0927 18:47:46.668902  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/functional-427306/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004530537s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-988927 "pgrep -a kubelet"
I0927 18:47:49.535909  299395 config.go:182] Loaded profile config "calico-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-925fx" [dc511206-0fa0-40da-b9b4-5c098564350d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-925fx" [dc511206-0fa0-40da-b9b4-5c098564350d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00377474s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (78.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.146884614s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.15s)

TestNetworkPlugins/group/flannel/Start (55.29s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0927 18:48:49.702984  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:49.709320  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:49.720678  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:49.742069  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:49.783483  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:49.864894  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:50.026571  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:50.348559  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:50.990249  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:52.271555  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:54.832863  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:48:59.954449  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:49:10.196627  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.291218275s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.29s)
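
The E0927 cert_rotation lines interleaved above are not part of the flannel test itself: they appear to come from client-go's certificate-rotation watcher still polling the client.crt of the default-k8s-diff-port-685871 profile, whose files were removed when that profile was deleted earlier in the run. A sketch of the cleanup that stops the noise (minikube delete also drops the profile's kubeconfig context):

# Remove the stale profile and confirm its context is gone.
minikube delete -p default-k8s-diff-port-685871
kubectl config get-contexts
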

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-988927 "pgrep -a kubelet"
I0927 18:49:21.460790  299395 config.go:182] Loaded profile config "enable-default-cni-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-js7tv" [d4a73c49-2bab-4c82-a12a-06dc0852e298] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-js7tv" [d4a73c49-2bab-4c82-a12a-06dc0852e298] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004463054s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pj4h9" [02c1c46d-261c-40bc-9da9-ff9db9527408] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004365306s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-988927 "pgrep -a kubelet"
I0927 18:49:29.900885  299395 config.go:182] Loaded profile config "flannel-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r7khs" [cdbe29e3-0981-45b1-90a1-bb36479d9da8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 18:49:30.678336  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/default-k8s-diff-port-685871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-r7khs" [cdbe29e3-0981-45b1-90a1-bb36479d9da8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004277654s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (45.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-988927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (45.015634238s)
--- PASS: TestNetworkPlugins/group/bridge/Start (45.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-988927 "pgrep -a kubelet"
I0927 18:50:41.364894  299395 config.go:182] Loaded profile config "bridge-988927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-988927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fsspf" [df2d402e-402f-4524-aaff-00f6978a6a9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 18:50:45.163373  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.170391  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.182049  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.207113  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.262634  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.344460  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:45.506157  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fsspf" [df2d402e-402f-4524-aaff-00f6978a6a9f] Running
E0927 18:50:45.828368  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:46.470434  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:47.751890  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.037072  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.043571  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.055052  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.076531  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.118136  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.199638  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.361195  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:49.683003  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:50.313463  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/kindnet-988927/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:50:50.324921  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/auto-988927/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004193807s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-988927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-988927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-268027 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-268027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-268027
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-140896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-140896
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.5s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0927 18:26:31.174820  299395 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/addons-583947/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: kubenet-988927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-988927

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-988927

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/hosts:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/resolv.conf:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-988927

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-988927" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-454178
contexts:
- context:
    cluster: pause-454178
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-454178
  name: pause-454178
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-454178
  user:
    client-certificate: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.crt
    client-key: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.key
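
The dumped kubeconfig defines only the pause-454178 cluster and leaves current-context empty, which is consistent with every kubectl probe above failing before reaching any API server. A minimal check against the same kubeconfig (a sketch; the output shown is what this config implies, not captured from the run):

$ kubectl config get-contexts -o name
pause-454178
$ kubectl --context kubenet-988927 get pods
error: context "kubenet-988927" does not exist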

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-988927

>>> host: docker daemon status:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: docker daemon config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: docker system info:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: cri-docker daemon status:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: cri-docker daemon config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: cri-dockerd version:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: containerd daemon status:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: containerd daemon config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: containerd config dump:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: crio daemon status:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: crio daemon config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: /etc/crio:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

>>> host: crio config:
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"

----------------------- debugLogs end: kubenet-988927 [took: 3.346251663s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-988927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-988927
--- SKIP: TestNetworkPlugins/group/kubenet (3.50s)
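
The two error shapes in the dump above are consistent with a profile that was never started: probes that shell into the node via minikube report a missing profile, while probes that go through kubectl report a missing context. A sketch of both failure modes, assuming the binary and profile name from this run:

$ out/minikube-linux-arm64 ssh -p kubenet-988927 "cat /etc/resolv.conf"
* Profile "kubenet-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-988927"
$ kubectl --context kubenet-988927 describe deployment netcat
error: context "kubenet-988927" does not exist
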

x
+
TestNetworkPlugins/group/cilium (5.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-988927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-988927

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-988927

>>> host: /etc/nsswitch.conf:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/hosts:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/resolv.conf:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-988927

>>> host: crictl pods:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: crictl containers:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> k8s: describe netcat deployment:
error: context "cilium-988927" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-988927" does not exist

>>> k8s: netcat logs:
error: context "cilium-988927" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-988927" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-988927" does not exist

>>> k8s: coredns logs:
error: context "cilium-988927" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-988927" does not exist

>>> k8s: api server logs:
error: context "cilium-988927" does not exist

>>> host: /etc/cni:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: ip a s:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: ip r s:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: iptables-save:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: iptables table nat:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-988927

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-988927

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-988927" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-988927" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-988927

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-988927

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-988927" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-988927" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-988927" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-988927" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-988927" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: kubelet daemon config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> k8s: kubelet logs:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19712-294006/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-454178
contexts:
- context:
    cluster: pause-454178
    extensions:
    - extension:
        last-update: Fri, 27 Sep 2024 18:25:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-454178
  name: pause-454178
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-454178
  user:
    client-certificate: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.crt
    client-key: /home/jenkins/minikube-integration/19712-294006/.minikube/profiles/pause-454178/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-988927

>>> host: docker daemon status:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: docker daemon config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: docker system info:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: cri-docker daemon status:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: cri-docker daemon config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: cri-dockerd version:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: containerd daemon status:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: containerd daemon config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: containerd config dump:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: crio daemon status:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: crio daemon config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: /etc/crio:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

>>> host: crio config:
* Profile "cilium-988927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-988927"

----------------------- debugLogs end: cilium-988927 [took: 4.998279772s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-988927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-988927
--- SKIP: TestNetworkPlugins/group/cilium (5.16s)
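
Both network-plugin variants above end in SKIP rather than FAIL (net_test.go gates them before any cluster is started), so the missing-profile output in their debug dumps is expected. To exercise one of these subtests locally, Go's subtest filter can be used (a sketch; it assumes minikube's integration tests under test/integration and does not reflect the exact CI invocation):

$ go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -timeout 30m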