Test Report: Docker_Linux_containerd_arm64 19749

50b5d8ee62174b462904730e907edeaa222f14db:2024-10-11:36607

Test failures (2/329)

Order  Failed test                                              Duration (s)
-----  -------------------------------------------------------  ------------
29     TestAddons/serial/Volcano                                211.13
303    TestStartStop/group/old-k8s-version/serial/SecondStart   386
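
To reproduce the Volcano failure outside CI, one plausible invocation is sketched below (a sketch, not this job's exact command: it assumes a checkout of the minikube repository and a local Docker daemon, and the flags the Jenkins job actually passed are not recorded in this report):

    # From the minikube repo root; flag values are illustrative
    go test ./test/integration -run "TestAddons/serial/Volcano" -timeout 60m \
      --minikube-start-args="--driver=docker --container-runtime=containerd"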
TestAddons/serial/Volcano (211.13s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:811: volcano-admission stabilized in 49.987502ms
addons_test.go:819: volcano-controller stabilized in 50.074246ms
addons_test.go:803: volcano-scheduler stabilized in 50.118512ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-w9lfw" [94fa1575-5ccb-46d1-aec1-e4a2983ff3c3] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003465696s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-lx7gl" [72131c47-a932-4fa4-a13b-6ddc10c12598] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003165779s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-6wjzc" [0cac67ba-d70b-4be7-a0ea-1a15752d197b] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004107085s
addons_test.go:838: (dbg) Run:  kubectl --context addons-652898 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-652898 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-652898 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a26bf0d6-fa45-404a-86a6-60ccc1662949] Pending
helpers_test.go:344: "test-job-nginx-0" [a26bf0d6-fa45-404a-86a6-60ccc1662949] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-652898 -n addons-652898
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-11 21:03:53.830872232 +0000 UTC m=+365.605784847
addons_test.go:870: (dbg) Run:  kubectl --context addons-652898 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-652898 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-285ee4e6-4cbb-44d1-93f3-02805f7ede83
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4l5h (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-f4l5h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-652898 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-652898 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
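
The FailedScheduling event above points at CPU exhaustion on the single minikube node rather than a Volcano fault: the node container is capped at 2 CPUs (see "NanoCpus" in the docker inspect output below) and test-job-nginx-0 requests a full CPU on top of the addon pods already running. A minimal sketch for confirming this against the cluster from these logs (the addons-652898 context name is taken from the commands above):

    # Compare the node's allocatable CPU with what is already requested
    kubectl --context addons-652898 describe node addons-652898 | grep -A 8 'Allocated resources'
    # List per-pod CPU requests across all namespaces
    kubectl --context addons-652898 get pods -A \
      -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'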
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-652898
helpers_test.go:235: (dbg) docker inspect addons-652898:
-- stdout --
	[
	    {
	        "Id": "49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6",
	        "Created": "2024-10-11T20:58:29.910673034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 877105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-11T20:58:30.142405672Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6/hosts",
	        "LogPath": "/var/lib/docker/containers/49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6/49084c68d1357cefe2b09fa5045c3714c84a3302458839482db0487d309251c6-json.log",
	        "Name": "/addons-652898",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-652898:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-652898",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/978f750727faa3a50865dc9a9d064440dbae81d9851ebb8f37590cd41f71798a-init/diff:/var/lib/docker/overlay2/64a038944358d2428e67305d9f97679b9a377ef43ac638d6a777391fae594f13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/978f750727faa3a50865dc9a9d064440dbae81d9851ebb8f37590cd41f71798a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/978f750727faa3a50865dc9a9d064440dbae81d9851ebb8f37590cd41f71798a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/978f750727faa3a50865dc9a9d064440dbae81d9851ebb8f37590cd41f71798a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-652898",
	                "Source": "/var/lib/docker/volumes/addons-652898/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-652898",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-652898",
	                "name.minikube.sigs.k8s.io": "addons-652898",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "26cab432b6405d2d459381355874fbfaa840d422d99e730f1b3dc3f90c3f2794",
	            "SandboxKey": "/var/run/docker/netns/26cab432b640",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-652898": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "526f2162ea50721c2e8f66d0ef7f40cb9f1abe6d36dce7f8419505a91b2ce5b1",
	                    "EndpointID": "8d13a6dd562086887a9a21c6f93345fc129fc18ce85d3127a56e5d259e366b37",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-652898",
	                        "49084c68d135"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
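
The inspect dump confirms the node's resource ceiling: "NanoCpus": 2000000000 (2 CPUs) and "Memory": 4194304000 bytes (the --memory=4000 start flag). When only a few fields matter, docker inspect's Go-template -f flag avoids dumping the full JSON; a sketch:

    # Pull just the CPU/memory limits and run state of the minikube node container
    docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-652898
    docker inspect -f '{{.State.Status}}' addons-652898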
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-652898 -n addons-652898
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 logs -n 25: (1.749156329s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-347006   | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-347006              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| delete  | -p download-only-347006              | download-only-347006   | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| start   | -o=json --download-only              | download-only-287405   | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-287405              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-287405              | download-only-287405   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-347006              | download-only-347006   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| delete  | -p download-only-287405              | download-only-287405   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                   | download-docker-547143 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | download-docker-547143               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-547143            | download-docker-547143 | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| start   | --download-only -p                   | binary-mirror-775051   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | binary-mirror-775051                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38901               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-775051              | binary-mirror-775051   | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 20:58 UTC |
	| addons  | enable dashboard -p                  | addons-652898          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-652898                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-652898          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC |                     |
	|         | addons-652898                        |                        |         |         |                     |                     |
	| start   | -p addons-652898 --wait=true         | addons-652898          | jenkins | v1.34.0 | 11 Oct 24 20:58 UTC | 11 Oct 24 21:00 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:58:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:58:04.954514  876623 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:58:04.954717  876623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:04.954743  876623 out.go:358] Setting ErrFile to fd 2...
	I1011 20:58:04.954763  876623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:58:04.955164  876623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 20:58:04.955762  876623 out.go:352] Setting JSON to false
	I1011 20:58:04.957006  876623 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16832,"bootTime":1728663453,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 20:58:04.957107  876623 start.go:139] virtualization:  
	I1011 20:58:04.959671  876623 out.go:177] * [addons-652898] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 20:58:04.961779  876623 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 20:58:04.961918  876623 notify.go:220] Checking for updates...
	I1011 20:58:04.965703  876623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:58:04.967701  876623 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 20:58:04.969553  876623 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 20:58:04.971327  876623 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 20:58:04.972940  876623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 20:58:04.974970  876623 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:58:04.999890  876623 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:58:05.000026  876623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:05.059405  876623 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:05.04956744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:05.059514  876623 docker.go:318] overlay module found
	I1011 20:58:05.061710  876623 out.go:177] * Using the docker driver based on user configuration
	I1011 20:58:05.063695  876623 start.go:297] selected driver: docker
	I1011 20:58:05.063735  876623 start.go:901] validating driver "docker" against <nil>
	I1011 20:58:05.063751  876623 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 20:58:05.064435  876623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:58:05.125649  876623 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-11 20:58:05.111593296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:58:05.125853  876623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:58:05.126084  876623 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:58:05.128016  876623 out.go:177] * Using Docker driver with root privileges
	I1011 20:58:05.129832  876623 cni.go:84] Creating CNI manager for ""
	I1011 20:58:05.129902  876623 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 20:58:05.129917  876623 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:58:05.130017  876623 start.go:340] cluster config:
	{Name:addons-652898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-652898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:05.132246  876623 out.go:177] * Starting "addons-652898" primary control-plane node in "addons-652898" cluster
	I1011 20:58:05.134085  876623 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1011 20:58:05.136292  876623 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:58:05.138110  876623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 20:58:05.138164  876623 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1011 20:58:05.138175  876623 cache.go:56] Caching tarball of preloaded images
	I1011 20:58:05.138198  876623 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:58:05.138260  876623 preload.go:172] Found /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 20:58:05.138286  876623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1011 20:58:05.138642  876623 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/config.json ...
	I1011 20:58:05.138675  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/config.json: {Name:mkee9dd1113f3a4f3ae3848e04e26121eafbe5ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:05.154031  876623 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:58:05.154153  876623 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:58:05.154174  876623 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1011 20:58:05.154178  876623 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1011 20:58:05.154186  876623 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1011 20:58:05.154191  876623 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1011 20:58:22.480264  876623 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1011 20:58:22.480314  876623 cache.go:194] Successfully downloaded all kic artifacts
	I1011 20:58:22.480359  876623 start.go:360] acquireMachinesLock for addons-652898: {Name:mk222cf258811b88e68ba9d4cadb0bb5dc04583b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 20:58:22.480493  876623 start.go:364] duration metric: took 115.815µs to acquireMachinesLock for "addons-652898"
	I1011 20:58:22.480521  876623 start.go:93] Provisioning new machine with config: &{Name:addons-652898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-652898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1011 20:58:22.480591  876623 start.go:125] createHost starting for "" (driver="docker")
	I1011 20:58:22.482937  876623 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1011 20:58:22.483189  876623 start.go:159] libmachine.API.Create for "addons-652898" (driver="docker")
	I1011 20:58:22.483223  876623 client.go:168] LocalClient.Create starting
	I1011 20:58:22.483332  876623 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem
	I1011 20:58:22.976313  876623 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem
	I1011 20:58:23.595674  876623 cli_runner.go:164] Run: docker network inspect addons-652898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 20:58:23.609232  876623 cli_runner.go:211] docker network inspect addons-652898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 20:58:23.609320  876623 network_create.go:284] running [docker network inspect addons-652898] to gather additional debugging logs...
	I1011 20:58:23.609341  876623 cli_runner.go:164] Run: docker network inspect addons-652898
	W1011 20:58:23.622755  876623 cli_runner.go:211] docker network inspect addons-652898 returned with exit code 1
	I1011 20:58:23.622787  876623 network_create.go:287] error running [docker network inspect addons-652898]: docker network inspect addons-652898: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-652898 not found
	I1011 20:58:23.622802  876623 network_create.go:289] output of [docker network inspect addons-652898]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-652898 not found
	
	** /stderr **
	I1011 20:58:23.622907  876623 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:23.638187  876623 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001fde920}
	I1011 20:58:23.638229  876623 network_create.go:124] attempt to create docker network addons-652898 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1011 20:58:23.638343  876623 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-652898 addons-652898
	I1011 20:58:23.709013  876623 network_create.go:108] docker network addons-652898 192.168.49.0/24 created
	I1011 20:58:23.709044  876623 kic.go:121] calculated static IP "192.168.49.2" for the "addons-652898" container
	I1011 20:58:23.709116  876623 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 20:58:23.724906  876623 cli_runner.go:164] Run: docker volume create addons-652898 --label name.minikube.sigs.k8s.io=addons-652898 --label created_by.minikube.sigs.k8s.io=true
	I1011 20:58:23.741519  876623 oci.go:103] Successfully created a docker volume addons-652898
	I1011 20:58:23.741618  876623 cli_runner.go:164] Run: docker run --rm --name addons-652898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652898 --entrypoint /usr/bin/test -v addons-652898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1011 20:58:25.774346  876623 cli_runner.go:217] Completed: docker run --rm --name addons-652898-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652898 --entrypoint /usr/bin/test -v addons-652898:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.032684439s)
	I1011 20:58:25.774377  876623 oci.go:107] Successfully prepared a docker volume addons-652898
	I1011 20:58:25.774413  876623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 20:58:25.774433  876623 kic.go:194] Starting extracting preloaded images to volume ...
	I1011 20:58:25.774503  876623 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-652898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 20:58:29.841031  876623 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-652898:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.066486899s)
	I1011 20:58:29.841069  876623 kic.go:203] duration metric: took 4.066631924s to extract preloaded images to volume ...
	W1011 20:58:29.841236  876623 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1011 20:58:29.841365  876623 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 20:58:29.896385  876623 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-652898 --name addons-652898 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652898 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-652898 --network addons-652898 --ip 192.168.49.2 --volume addons-652898:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1011 20:58:30.336016  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Running}}
	I1011 20:58:30.359950  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:30.386184  876623 cli_runner.go:164] Run: docker exec addons-652898 stat /var/lib/dpkg/alternatives/iptables
	I1011 20:58:30.457338  876623 oci.go:144] the created container "addons-652898" has a running status.
	I1011 20:58:30.457366  876623 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa...
	I1011 20:58:30.751181  876623 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 20:58:30.773838  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:30.808858  876623 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 20:58:30.808877  876623 kic_runner.go:114] Args: [docker exec --privileged addons-652898 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 20:58:30.894188  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:30.927792  876623 machine.go:93] provisionDockerMachine start ...
	I1011 20:58:30.927883  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:30.956997  876623 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:30.957258  876623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1011 20:58:30.957267  876623 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 20:58:31.110216  876623 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-652898
	
	I1011 20:58:31.110341  876623 ubuntu.go:169] provisioning hostname "addons-652898"
	I1011 20:58:31.110453  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:31.134470  876623 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:31.134730  876623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1011 20:58:31.134743  876623 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-652898 && echo "addons-652898" | sudo tee /etc/hostname
	I1011 20:58:31.316008  876623 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-652898
	
	I1011 20:58:31.316114  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:31.334234  876623 main.go:141] libmachine: Using SSH client type: native
	I1011 20:58:31.334623  876623 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1011 20:58:31.334681  876623 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-652898' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-652898/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-652898' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 20:58:31.474767  876623 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 20:58:31.474803  876623 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19749-870468/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-870468/.minikube}
	I1011 20:58:31.474835  876623 ubuntu.go:177] setting up certificates
	I1011 20:58:31.474845  876623 provision.go:84] configureAuth start
	I1011 20:58:31.474935  876623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652898
	I1011 20:58:31.492765  876623 provision.go:143] copyHostCerts
	I1011 20:58:31.492857  876623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem (1078 bytes)
	I1011 20:58:31.492993  876623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem (1123 bytes)
	I1011 20:58:31.493058  876623 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem (1675 bytes)
	I1011 20:58:31.493113  876623 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem org=jenkins.addons-652898 san=[127.0.0.1 192.168.49.2 addons-652898 localhost minikube]
	I1011 20:58:31.994085  876623 provision.go:177] copyRemoteCerts
	I1011 20:58:31.994152  876623 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 20:58:31.994194  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:32.013873  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:32.108323  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 20:58:32.134652  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1011 20:58:32.160469  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1011 20:58:32.185397  876623 provision.go:87] duration metric: took 710.534105ms to configureAuth
	I1011 20:58:32.185427  876623 ubuntu.go:193] setting minikube options for container-runtime
	I1011 20:58:32.185617  876623 config.go:182] Loaded profile config "addons-652898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 20:58:32.185624  876623 machine.go:96] duration metric: took 1.257815146s to provisionDockerMachine
	I1011 20:58:32.185630  876623 client.go:171] duration metric: took 9.702398701s to LocalClient.Create
	I1011 20:58:32.185645  876623 start.go:167] duration metric: took 9.702460272s to libmachine.API.Create "addons-652898"
	I1011 20:58:32.185652  876623 start.go:293] postStartSetup for "addons-652898" (driver="docker")
	I1011 20:58:32.185662  876623 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 20:58:32.185724  876623 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 20:58:32.185771  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:32.202303  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:32.299422  876623 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 20:58:32.302670  876623 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 20:58:32.302714  876623 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 20:58:32.302728  876623 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 20:58:32.302743  876623 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1011 20:58:32.302758  876623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/addons for local assets ...
	I1011 20:58:32.302828  876623 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/files for local assets ...
	I1011 20:58:32.302854  876623 start.go:296] duration metric: took 117.195498ms for postStartSetup
	I1011 20:58:32.303174  876623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652898
	I1011 20:58:32.319633  876623 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/config.json ...
	I1011 20:58:32.319935  876623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 20:58:32.319994  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:32.336072  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:32.427519  876623 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 20:58:32.432962  876623 start.go:128] duration metric: took 9.952353609s to createHost
	I1011 20:58:32.432988  876623 start.go:83] releasing machines lock for "addons-652898", held for 9.952484685s
	I1011 20:58:32.433071  876623 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652898
	I1011 20:58:32.451805  876623 ssh_runner.go:195] Run: cat /version.json
	I1011 20:58:32.451864  876623 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 20:58:32.451867  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:32.451924  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:32.475930  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:32.480054  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:32.710482  876623 ssh_runner.go:195] Run: systemctl --version
	I1011 20:58:32.715031  876623 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 20:58:32.719053  876623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1011 20:58:32.742887  876623 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1011 20:58:32.742974  876623 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 20:58:32.771160  876623 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
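The two find invocations above first patch any loopback CNI config (ensuring it carries a name and cniVersion 1.0.0), then disable competing bridge/podman configs by renaming them with a .mk_disabled suffix so the CNI chosen later can own pod networking. A rough Go equivalent of the disabling step (hypothetical, not minikube's cni package):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Match the same config families the find command above targets.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			// Renaming (rather than deleting) keeps the config recoverable.
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", path)
		}
	}
}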
	I1011 20:58:32.771183  876623 start.go:495] detecting cgroup driver to use...
	I1011 20:58:32.771217  876623 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1011 20:58:32.771268  876623 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1011 20:58:32.784134  876623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 20:58:32.795505  876623 docker.go:217] disabling cri-docker service (if available) ...
	I1011 20:58:32.795590  876623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 20:58:32.809094  876623 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 20:58:32.823491  876623 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 20:58:32.915037  876623 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 20:58:33.014253  876623 docker.go:233] disabling docker service ...
	I1011 20:58:33.014368  876623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 20:58:33.035739  876623 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 20:58:33.049395  876623 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 20:58:33.138309  876623 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 20:58:33.227784  876623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 20:58:33.239254  876623 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 20:58:33.254790  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1011 20:58:33.264860  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 20:58:33.275444  876623 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 20:58:33.275538  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 20:58:33.285885  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 20:58:33.296197  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 20:58:33.306020  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 20:58:33.316717  876623 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 20:58:33.326417  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 20:58:33.336636  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1011 20:58:33.346644  876623 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1011 20:58:33.356668  876623 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 20:58:33.365891  876623 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 20:58:33.374578  876623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:33.462106  876623 ssh_runner.go:195] Run: sudo systemctl restart containerd
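Taken together, the sed edits above rewrite /etc/containerd/config.toml in place: pin the sandbox (pause) image, force SystemdCgroup = false to match the detected cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which containerd is restarted. A hypothetical Go sketch of the same rewrite (regex-based, like the sed commands; not minikube's containerd.go):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Each entry mirrors one of the sed expressions in the log above.
	rules := map[string]string{
		`(?m)^(\s*)sandbox_image = .*$`:    `${1}sandbox_image = "registry.k8s.io/pause:3.10"`,
		`(?m)^(\s*)SystemdCgroup = .*$`:    `${1}SystemdCgroup = false`,
		`"io.containerd.runtime.v1.linux"`: `"io.containerd.runc.v2"`,
		`"io.containerd.runc.v1"`:          `"io.containerd.runc.v2"`,
		`(?m)^(\s*)conf_dir = .*$`:         `${1}conf_dir = "/etc/cni/net.d"`,
	}
	for pattern, repl := range rules {
		data = regexp.MustCompile(pattern).ReplaceAll(data, []byte(repl))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}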
	I1011 20:58:33.594095  876623 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1011 20:58:33.594216  876623 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1011 20:58:33.597894  876623 start.go:563] Will wait 60s for crictl version
	I1011 20:58:33.597985  876623 ssh_runner.go:195] Run: which crictl
	I1011 20:58:33.601389  876623 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 20:58:33.639534  876623 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1011 20:58:33.639680  876623 ssh_runner.go:195] Run: containerd --version
	I1011 20:58:33.662471  876623 ssh_runner.go:195] Run: containerd --version
	I1011 20:58:33.689975  876623 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1011 20:58:33.693021  876623 cli_runner.go:164] Run: docker network inspect addons-652898 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 20:58:33.708512  876623 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1011 20:58:33.712082  876623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 20:58:33.723092  876623 kubeadm.go:883] updating cluster {Name:addons-652898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-652898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 20:58:33.723212  876623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 20:58:33.723276  876623 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:33.761680  876623 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 20:58:33.761702  876623 containerd.go:534] Images already preloaded, skipping extraction
	I1011 20:58:33.761761  876623 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 20:58:33.799622  876623 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 20:58:33.799647  876623 cache_images.go:84] Images are preloaded, skipping loading
	I1011 20:58:33.799656  876623 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1011 20:58:33.799750  876623 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-652898 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-652898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 20:58:33.799817  876623 ssh_runner.go:195] Run: sudo crictl info
	I1011 20:58:33.835923  876623 cni.go:84] Creating CNI manager for ""
	I1011 20:58:33.835945  876623 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 20:58:33.835955  876623 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 20:58:33.835986  876623 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-652898 NodeName:addons-652898 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 20:58:33.836119  876623 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-652898"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
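The kubeadm config dumped above is rendered by minikube from the options struct logged at kubeadm.go:181 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal, hypothetical sketch of that render step using Go's text/template (the struct and field names here are illustrative, not minikube's bootstrapper types):

package main

import (
	"os"
	"text/template"
)

// opts carries the handful of values that vary per cluster.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		NodeName:         "addons-652898",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}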
	
	I1011 20:58:33.836185  876623 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 20:58:33.844700  876623 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 20:58:33.844770  876623 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 20:58:33.853758  876623 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1011 20:58:33.872522  876623 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 20:58:33.890624  876623 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1011 20:58:33.909215  876623 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1011 20:58:33.912551  876623 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
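The bash one-liner above is a simple atomic-ish hosts update: filter out any existing control-plane.minikube.internal entry, append the fresh one, and copy the temp file back over /etc/hosts. A hypothetical Go version of the same idea:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost drops any stale line ending in "\t<name>" and appends "<ip>\t<name>".
func ensureHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}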
	I1011 20:58:33.923489  876623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:34.014346  876623 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 20:58:34.034897  876623 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898 for IP: 192.168.49.2
	I1011 20:58:34.034922  876623 certs.go:194] generating shared ca certs ...
	I1011 20:58:34.034961  876623 certs.go:226] acquiring lock for ca certs: {Name:mk314562fa38b26f30da8f33a861c5cef3708653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.035875  876623 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key
	I1011 20:58:34.404981  876623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt ...
	I1011 20:58:34.405013  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt: {Name:mk442dee21d5540c27682ecefc41d3a13a5ac983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.405251  876623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key ...
	I1011 20:58:34.405267  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key: {Name:mk151fe5491e57bbfb40fe0b36090c156e561db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.405373  876623 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key
	I1011 20:58:34.593294  876623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.crt ...
	I1011 20:58:34.593325  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.crt: {Name:mk45d59223d1de7789f054a190d2446e07835981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.594299  876623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key ...
	I1011 20:58:34.594324  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key: {Name:mkcabd69a65702890b15647c10039c62d2f1b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:34.594423  876623 certs.go:256] generating profile certs ...
	I1011 20:58:34.594492  876623 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.key
	I1011 20:58:34.594518  876623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt with IP's: []
	I1011 20:58:35.101239  876623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt ...
	I1011 20:58:35.101277  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: {Name:mk8f73c99898255efa45d75c51d7e0fcad947e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.102090  876623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.key ...
	I1011 20:58:35.102112  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.key: {Name:mk00f1bb5b2c0028633610a9e65fa3fc9212a78d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.102817  876623 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key.ef0172fd
	I1011 20:58:35.102852  876623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt.ef0172fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1011 20:58:35.718577  876623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt.ef0172fd ...
	I1011 20:58:35.718615  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt.ef0172fd: {Name:mk1ea873eee71323297e421ed21bc2c80cd91321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.718812  876623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key.ef0172fd ...
	I1011 20:58:35.718831  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key.ef0172fd: {Name:mk4b3ed5e371ef195b3a5db5de09f5aa388036d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:35.718928  876623 certs.go:381] copying /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt.ef0172fd -> /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt
	I1011 20:58:35.719006  876623 certs.go:385] copying /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key.ef0172fd -> /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key
	I1011 20:58:35.719063  876623 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.key
	I1011 20:58:35.719084  876623 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.crt with IP's: []
	I1011 20:58:36.236628  876623 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.crt ...
	I1011 20:58:36.236661  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.crt: {Name:mke57ff2616a75a1fa99a60dca84d84ba7729d88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:36.237486  876623 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.key ...
	I1011 20:58:36.237504  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.key: {Name:mk711cbb90b9f8903c8c82a406a8d99bd080abc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:36.238051  876623 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 20:58:36.238095  876623 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem (1078 bytes)
	I1011 20:58:36.238127  876623 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem (1123 bytes)
	I1011 20:58:36.238156  876623 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem (1675 bytes)
	I1011 20:58:36.238839  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 20:58:36.263143  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 20:58:36.286697  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 20:58:36.309949  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 20:58:36.333454  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1011 20:58:36.357997  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 20:58:36.383895  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 20:58:36.409416  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 20:58:36.434199  876623 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 20:58:36.458182  876623 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 20:58:36.476415  876623 ssh_runner.go:195] Run: openssl version
	I1011 20:58:36.481555  876623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 20:58:36.490833  876623 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:36.494074  876623 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:36.494133  876623 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 20:58:36.500804  876623 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 20:58:36.510354  876623 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 20:58:36.513609  876623 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 20:58:36.513662  876623 kubeadm.go:392] StartCluster: {Name:addons-652898 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-652898 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:58:36.513745  876623 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1011 20:58:36.513801  876623 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 20:58:36.550340  876623 cri.go:89] found id: ""
	I1011 20:58:36.550410  876623 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 20:58:36.559267  876623 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 20:58:36.568045  876623 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1011 20:58:36.568135  876623 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 20:58:36.577331  876623 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 20:58:36.577353  876623 kubeadm.go:157] found existing configuration files:
	
	I1011 20:58:36.577405  876623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 20:58:36.585996  876623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 20:58:36.586080  876623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 20:58:36.594820  876623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 20:58:36.603407  876623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 20:58:36.603516  876623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 20:58:36.611939  876623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 20:58:36.620429  876623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 20:58:36.620506  876623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 20:58:36.629034  876623 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 20:58:36.637730  876623 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 20:58:36.637801  876623 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 20:58:36.646139  876623 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 20:58:36.685721  876623 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 20:58:36.685797  876623 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 20:58:36.703061  876623 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1011 20:58:36.703157  876623 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1011 20:58:36.703210  876623 kubeadm.go:310] OS: Linux
	I1011 20:58:36.703273  876623 kubeadm.go:310] CGROUPS_CPU: enabled
	I1011 20:58:36.703336  876623 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1011 20:58:36.703399  876623 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1011 20:58:36.703461  876623 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1011 20:58:36.703516  876623 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1011 20:58:36.703587  876623 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1011 20:58:36.703648  876623 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1011 20:58:36.703708  876623 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1011 20:58:36.703772  876623 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1011 20:58:36.766987  876623 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 20:58:36.767101  876623 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 20:58:36.767197  876623 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 20:58:36.773084  876623 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 20:58:36.776795  876623 out.go:235]   - Generating certificates and keys ...
	I1011 20:58:36.776893  876623 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 20:58:36.776968  876623 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 20:58:37.597608  876623 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 20:58:38.252615  876623 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 20:58:38.440023  876623 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 20:58:39.493166  876623 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 20:58:39.890495  876623 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 20:58:39.890829  876623 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-652898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:40.830927  876623 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 20:58:40.831289  876623 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-652898 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1011 20:58:41.549447  876623 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 20:58:41.688736  876623 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 20:58:42.320368  876623 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 20:58:42.321753  876623 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 20:58:42.497912  876623 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 20:58:42.762743  876623 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 20:58:43.220467  876623 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 20:58:43.950545  876623 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 20:58:44.244273  876623 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 20:58:44.244888  876623 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 20:58:44.247989  876623 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 20:58:44.250514  876623 out.go:235]   - Booting up control plane ...
	I1011 20:58:44.250616  876623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 20:58:44.250697  876623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 20:58:44.251577  876623 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 20:58:44.262382  876623 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 20:58:44.268625  876623 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 20:58:44.268685  876623 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 20:58:44.374995  876623 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 20:58:44.375114  876623 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 20:58:45.876284  876623 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501355711s
	I1011 20:58:45.876379  876623 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1011 20:58:51.878041  876623 kubeadm.go:310] [api-check] The API server is healthy after 6.001790119s
	I1011 20:58:51.904110  876623 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1011 20:58:51.924171  876623 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1011 20:58:51.946911  876623 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1011 20:58:51.947107  876623 kubeadm.go:310] [mark-control-plane] Marking the node addons-652898 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1011 20:58:51.957247  876623 kubeadm.go:310] [bootstrap-token] Using token: ceofi6.8hnqacxw0t5fupy6
	I1011 20:58:51.960617  876623 out.go:235]   - Configuring RBAC rules ...
	I1011 20:58:51.960745  876623 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1011 20:58:51.965396  876623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1011 20:58:51.973435  876623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1011 20:58:51.977112  876623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1011 20:58:51.981041  876623 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1011 20:58:51.985054  876623 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1011 20:58:52.286721  876623 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1011 20:58:52.711995  876623 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1011 20:58:53.286638  876623 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1011 20:58:53.287816  876623 kubeadm.go:310] 
	I1011 20:58:53.287896  876623 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1011 20:58:53.287910  876623 kubeadm.go:310] 
	I1011 20:58:53.287994  876623 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1011 20:58:53.288004  876623 kubeadm.go:310] 
	I1011 20:58:53.288030  876623 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1011 20:58:53.288098  876623 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1011 20:58:53.288153  876623 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1011 20:58:53.288161  876623 kubeadm.go:310] 
	I1011 20:58:53.288214  876623 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1011 20:58:53.288222  876623 kubeadm.go:310] 
	I1011 20:58:53.288269  876623 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1011 20:58:53.288279  876623 kubeadm.go:310] 
	I1011 20:58:53.288330  876623 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1011 20:58:53.288409  876623 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1011 20:58:53.288480  876623 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1011 20:58:53.288487  876623 kubeadm.go:310] 
	I1011 20:58:53.288571  876623 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1011 20:58:53.288650  876623 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1011 20:58:53.288662  876623 kubeadm.go:310] 
	I1011 20:58:53.288746  876623 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ceofi6.8hnqacxw0t5fupy6 \
	I1011 20:58:53.288852  876623 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:867a6385a8ec684c045d5c1dfbff99515bbb0cd75aed423360eead7a61d7346c \
	I1011 20:58:53.288878  876623 kubeadm.go:310] 	--control-plane 
	I1011 20:58:53.288885  876623 kubeadm.go:310] 
	I1011 20:58:53.288969  876623 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1011 20:58:53.288979  876623 kubeadm.go:310] 
	I1011 20:58:53.289060  876623 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ceofi6.8hnqacxw0t5fupy6 \
	I1011 20:58:53.289165  876623 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:867a6385a8ec684c045d5c1dfbff99515bbb0cd75aed423360eead7a61d7346c 
	I1011 20:58:53.293452  876623 kubeadm.go:310] W1011 20:58:36.681587    1032 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:53.293768  876623 kubeadm.go:310] W1011 20:58:36.683267    1032 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1011 20:58:53.293988  876623 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1011 20:58:53.294099  876623 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1011 20:58:53.294120  876623 cni.go:84] Creating CNI manager for ""
	I1011 20:58:53.294131  876623 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 20:58:53.296248  876623 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1011 20:58:53.298365  876623 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1011 20:58:53.302114  876623 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1011 20:58:53.302133  876623 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1011 20:58:53.321940  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1011 20:58:53.624819  876623 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1011 20:58:53.624949  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:53.625028  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-652898 minikube.k8s.io/updated_at=2024_10_11T20_58_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd minikube.k8s.io/name=addons-652898 minikube.k8s.io/primary=true
	I1011 20:58:53.813632  876623 ops.go:34] apiserver oom_adj: -16
	I1011 20:58:53.813753  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.313950  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:54.814016  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.313892  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:55.814412  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:56.314393  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:56.814513  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:57.313907  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:57.814794  876623 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1011 20:58:57.933525  876623 kubeadm.go:1113] duration metric: took 4.308616451s to wait for elevateKubeSystemPrivileges
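The half-second cadence of the kubectl get sa default runs above is minikube polling until kubeadm's controllers have created the default ServiceAccount, which gates the cluster-admin role binding applied at 20:58:53. The same wait expressed directly against the API with client-go (a hypothetical sketch; it requires the k8s.io/client-go module, whereas minikube shells out to kubectl as logged):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Success means kube-controller-manager has populated the namespace.
		if _, err := client.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence above
	}
	panic("timed out waiting for the default ServiceAccount")
}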
	I1011 20:58:57.933554  876623 kubeadm.go:394] duration metric: took 21.419899197s to StartCluster
	I1011 20:58:57.933572  876623 settings.go:142] acquiring lock: {Name:mk7b73c41886578ea1058c5600a9d67189a81ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:57.933702  876623 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 20:58:57.934076  876623 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/kubeconfig: {Name:mk3426a24f1490293c678ab8b1b76454f1a9ac37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 20:58:57.934366  876623 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1011 20:58:57.934510  876623 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1011 20:58:57.934748  876623 config.go:182] Loaded profile config "addons-652898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 20:58:57.934790  876623 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1011 20:58:57.934879  876623 addons.go:69] Setting yakd=true in profile "addons-652898"
	I1011 20:58:57.934898  876623 addons.go:234] Setting addon yakd=true in "addons-652898"
	I1011 20:58:57.934922  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.935454  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.935835  876623 addons.go:69] Setting metrics-server=true in profile "addons-652898"
	I1011 20:58:57.935854  876623 addons.go:234] Setting addon metrics-server=true in "addons-652898"
	I1011 20:58:57.935879  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.936309  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.938251  876623 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-652898"
	I1011 20:58:57.938537  876623 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-652898"
	I1011 20:58:57.938574  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.938999  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.940732  876623 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-652898"
	I1011 20:58:57.940772  876623 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-652898"
	I1011 20:58:57.940828  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.938444  876623 addons.go:69] Setting registry=true in profile "addons-652898"
	I1011 20:58:57.938459  876623 addons.go:69] Setting storage-provisioner=true in profile "addons-652898"
	I1011 20:58:57.938468  876623 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-652898"
	I1011 20:58:57.938475  876623 addons.go:69] Setting volcano=true in profile "addons-652898"
	I1011 20:58:57.938481  876623 addons.go:69] Setting volumesnapshots=true in profile "addons-652898"
	I1011 20:58:57.938522  876623 out.go:177] * Verifying Kubernetes components...
	I1011 20:58:57.941076  876623 addons.go:69] Setting cloud-spanner=true in profile "addons-652898"
	I1011 20:58:57.941114  876623 addons.go:234] Setting addon cloud-spanner=true in "addons-652898"
	I1011 20:58:57.941168  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.941796  876623 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-652898"
	I1011 20:58:57.941891  876623 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-652898"
	I1011 20:58:57.941949  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.942710  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.948261  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.951083  876623 addons.go:69] Setting default-storageclass=true in profile "addons-652898"
	I1011 20:58:57.951112  876623 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-652898"
	I1011 20:58:57.951454  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.958409  876623 addons.go:234] Setting addon registry=true in "addons-652898"
	I1011 20:58:57.958468  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.961615  876623 addons.go:234] Setting addon storage-provisioner=true in "addons-652898"
	I1011 20:58:57.961665  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.962136  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.962543  876623 addons.go:69] Setting gcp-auth=true in profile "addons-652898"
	I1011 20:58:57.962570  876623 mustload.go:65] Loading cluster: addons-652898
	I1011 20:58:57.962756  876623 config.go:182] Loaded profile config "addons-652898": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 20:58:57.962987  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.976712  876623 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-652898"
	I1011 20:58:57.977080  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.994383  876623 addons.go:234] Setting addon volcano=true in "addons-652898"
	I1011 20:58:57.994448  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.994939  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:57.995429  876623 addons.go:69] Setting ingress=true in profile "addons-652898"
	I1011 20:58:57.995454  876623 addons.go:234] Setting addon ingress=true in "addons-652898"
	I1011 20:58:57.995493  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:57.995897  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.011811  876623 addons.go:234] Setting addon volumesnapshots=true in "addons-652898"
	I1011 20:58:58.011872  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.012395  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.020410  876623 addons.go:69] Setting ingress-dns=true in profile "addons-652898"
	I1011 20:58:58.020441  876623 addons.go:234] Setting addon ingress-dns=true in "addons-652898"
	I1011 20:58:58.020489  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.020981  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.046371  876623 addons.go:69] Setting inspektor-gadget=true in profile "addons-652898"
	I1011 20:58:58.046413  876623 addons.go:234] Setting addon inspektor-gadget=true in "addons-652898"
	I1011 20:58:58.046452  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.046948  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.050202  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.072284  876623 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 20:58:58.078725  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.136503  876623 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1011 20:58:58.169147  876623 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:58.169230  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1011 20:58:58.169345  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
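
The Go template in the cli_runner call above is how this run resolves its SSH endpoint: it pulls the host port Docker mapped to the node container's 22/tcp. A standalone equivalent (the same command as logged, shown only to unpack the template) is:

    # Prints the 127.0.0.1 port the sshutil clients below dial (33873 in this run).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-652898
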
	I1011 20:58:58.166595  876623 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1011 20:58:58.166605  876623 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 20:58:58.166608  876623 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1011 20:58:58.199068  876623 addons.go:234] Setting addon default-storageclass=true in "addons-652898"
	I1011 20:58:58.201047  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.201488  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.202948  876623 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1011 20:58:58.203101  876623 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:58.203131  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 20:58:58.203229  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.217293  876623 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1011 20:58:58.219433  876623 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1011 20:58:58.219468  876623 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1011 20:58:58.219547  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.197118  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.199409  876623 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 20:58:58.228122  876623 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 20:58:58.228197  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.238668  876623 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1011 20:58:58.238685  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1011 20:58:58.240025  876623 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1011 20:58:58.240282  876623 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:58.241502  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1011 20:58:58.241577  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.244628  876623 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:58.247395  876623 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:58:58.247503  876623 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1011 20:58:58.250500  876623 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:58.250523  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1011 20:58:58.250584  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.258384  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1011 20:58:58.262828  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1011 20:58:58.263281  876623 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:58.263298  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1011 20:58:58.263361  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.269411  876623 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1011 20:58:58.269579  876623 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1011 20:58:58.269594  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1011 20:58:58.306457  876623 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1011 20:58:58.306563  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.326563  876623 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:58.326585  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1011 20:58:58.326654  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.346396  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1011 20:58:58.351069  876623 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1011 20:58:58.351097  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1011 20:58:58.351166  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.365837  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1011 20:58:58.365943  876623 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1011 20:58:58.366658  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.367373  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.368151  876623 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1011 20:58:58.369912  876623 out.go:177]   - Using image docker.io/registry:2.8.3
	I1011 20:58:58.369960  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1011 20:58:58.371635  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1011 20:58:58.371750  876623 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1011 20:58:58.371760  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1011 20:58:58.371820  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.379810  876623 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-652898"
	I1011 20:58:58.379861  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:58:58.380288  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:58:58.383746  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.384447  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.386455  876623 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:58.386473  876623 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 20:58:58.386529  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.403210  876623 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1011 20:58:58.403278  876623 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1011 20:58:58.403355  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.407905  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.408532  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1011 20:58:58.410066  876623 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1011 20:58:58.416024  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1011 20:58:58.416052  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1011 20:58:58.416129  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.444824  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.466643  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.523096  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.525517  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.531856  876623 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1011 20:58:58.533426  876623 out.go:177]   - Using image docker.io/busybox:stable
	I1011 20:58:58.535496  876623 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:58.535520  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1011 20:58:58.535586  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:58:58.546958  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.554551  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.558017  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.562208  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.575330  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:58:58.597892  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	W1011 20:58:58.604797  876623 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1011 20:58:58.604828  876623 retry.go:31] will retry after 268.680635ms: ssh: handshake failed: EOF
	I1011 20:58:58.673075  876623 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1011 20:58:58.673333  876623 ssh_runner.go:195] Run: sudo systemctl start kubelet
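
The bash pipeline at 20:58:58.673075 edits the coredns ConfigMap in place: its two sed expressions splice a hosts block, mapping host.minikube.internal to the gateway 192.168.49.1, in ahead of the forward plugin, and a log directive ahead of errors. A hedged way to inspect the result (not a step of the test run) is:

    # Reconstructed from the sed expressions above, the patched Corefile gains:
    #     log
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl --context addons-652898 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'
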
	I1011 20:58:58.993931  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1011 20:58:59.022773  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1011 20:58:59.104690  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 20:58:59.117610  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1011 20:58:59.134588  876623 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1011 20:58:59.134613  876623 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1011 20:58:59.138351  876623 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 20:58:59.138376  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1011 20:58:59.150310  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1011 20:58:59.209254  876623 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:59.209275  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1011 20:58:59.214695  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1011 20:58:59.219953  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1011 20:58:59.224982  876623 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1011 20:58:59.225010  876623 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1011 20:58:59.237113  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 20:58:59.283757  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1011 20:58:59.283783  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1011 20:58:59.366580  876623 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1011 20:58:59.366615  876623 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1011 20:58:59.413841  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1011 20:58:59.456369  876623 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1011 20:58:59.456437  876623 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1011 20:58:59.486854  876623 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 20:58:59.486925  876623 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 20:58:59.581753  876623 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1011 20:58:59.581841  876623 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1011 20:58:59.679178  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1011 20:58:59.679263  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1011 20:58:59.701733  876623 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:58:59.701808  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1011 20:58:59.705492  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1011 20:58:59.845098  876623 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1011 20:58:59.845176  876623 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1011 20:58:59.911900  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1011 20:58:59.911989  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1011 20:58:59.994728  876623 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:58:59.994803  876623 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 20:59:00.072964  876623 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1011 20:59:00.073053  876623 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1011 20:59:00.173062  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1011 20:59:00.342215  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1011 20:59:00.342327  876623 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1011 20:59:00.367033  876623 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:00.367116  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1011 20:59:00.574505  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1011 20:59:00.574586  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1011 20:59:00.690016  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 20:59:00.879767  876623 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:00.879838  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1011 20:59:00.886450  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1011 20:59:00.952666  876623 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.279306116s)
	I1011 20:59:00.952731  876623 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.279631989s)
	I1011 20:59:00.952907  876623 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1011 20:59:00.954601  876623 node_ready.go:35] waiting up to 6m0s for node "addons-652898" to be "Ready" ...
	I1011 20:59:00.959422  876623 node_ready.go:49] node "addons-652898" has status "Ready":"True"
	I1011 20:59:00.959445  876623 node_ready.go:38] duration metric: took 4.59345ms for node "addons-652898" to be "Ready" ...
	I1011 20:59:00.959455  876623 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:00.980570  876623 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace to be "Ready" ...
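
The readiness polling in node_ready.go and pod_ready.go maps onto kubectl's wait verb; a minimal sketch of equivalent checks, assuming the same 6m budget, is:

    # The node is already Ready at 20:59:00; the CoreDNS pod turns Ready at 20:59:27 below.
    kubectl --context addons-652898 wait --timeout=6m \
      --for=condition=Ready node/addons-652898
    kubectl --context addons-652898 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod/coredns-7c65d6cfc9-6k5hf
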
	I1011 20:59:01.224920  876623 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1011 20:59:01.225009  876623 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1011 20:59:01.458051  876623 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-652898" context rescaled to 1 replicas
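
The rescale logged just above trims CoreDNS for a single-node cluster; a hedged command-line equivalent would be:

    # Drop CoreDNS from its default two replicas to one.
    kubectl --context addons-652898 -n kube-system \
      scale deployment coredns --replicas=1
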
	I1011 20:59:01.532773  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1011 20:59:01.532843  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1011 20:59:01.637392  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:01.880508  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1011 20:59:01.880587  876623 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1011 20:59:02.120792  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1011 20:59:02.120872  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1011 20:59:02.240818  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1011 20:59:02.240890  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1011 20:59:02.633740  876623 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:02.633825  876623 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1011 20:59:02.837720  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1011 20:59:03.068633  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:05.486936  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:05.509598  876623 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1011 20:59:05.509748  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:59:05.564352  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:59:06.078559  876623 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1011 20:59:06.328788  876623 addons.go:234] Setting addon gcp-auth=true in "addons-652898"
	I1011 20:59:06.328888  876623 host.go:66] Checking if "addons-652898" exists ...
	I1011 20:59:06.329453  876623 cli_runner.go:164] Run: docker container inspect addons-652898 --format={{.State.Status}}
	I1011 20:59:06.362540  876623 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1011 20:59:06.362597  876623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652898
	I1011 20:59:06.395871  876623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/addons-652898/id_rsa Username:docker}
	I1011 20:59:07.997027  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:09.423577  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.318802132s)
	I1011 20:59:09.423639  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.306003412s)
	I1011 20:59:09.423678  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.273346429s)
	I1011 20:59:09.423706  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.2089901s)
	I1011 20:59:09.423798  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.20382684s)
	I1011 20:59:09.423811  876623 addons.go:475] Verifying addon ingress=true in "addons-652898"
	I1011 20:59:09.423968  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (10.400665731s)
	I1011 20:59:09.424017  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.186872526s)
	I1011 20:59:09.424249  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.01033216s)
	I1011 20:59:09.424348  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.718786037s)
	I1011 20:59:09.424387  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.251234756s)
	I1011 20:59:09.424400  876623 addons.go:475] Verifying addon registry=true in "addons-652898"
	I1011 20:59:09.424687  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.734597557s)
	I1011 20:59:09.424708  876623 addons.go:475] Verifying addon metrics-server=true in "addons-652898"
	I1011 20:59:09.424760  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.538248377s)
	I1011 20:59:09.425037  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.431033768s)
	I1011 20:59:09.424884  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.78740465s)
	W1011 20:59:09.425075  876623 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1011 20:59:09.425090  876623 retry.go:31] will retry after 141.55083ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
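
The apply failure above is an ordering race: the batch contains both the VolumeSnapshotClass CRD and an object of that kind, and the API server had not yet registered the new kind, hence "ensure CRDs are installed first". The retry below (20:59:09.567, with apply --force) succeeds; a hedged sketch of sidestepping the race altogether is to establish the CRDs before the dependent objects:

    # Apply the CRDs first, wait until the API server accepts the new kinds,
    # then apply the objects that instantiate them.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --timeout=60s --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
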
	I1011 20:59:09.426652  876623 out.go:177] * Verifying registry addon...
	I1011 20:59:09.426745  876623 out.go:177] * Verifying ingress addon...
	I1011 20:59:09.426767  876623 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-652898 service yakd-dashboard -n yakd-dashboard
	
	I1011 20:59:09.429206  876623 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1011 20:59:09.430105  876623 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
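
These kapi waiters poll pods by label selector until they leave Pending; an illustrative kubectl near-equivalent (selector values taken from the log, timeout chosen for illustration) is:

    # Note: a plain condition=Ready wait is only an approximation of kapi's polling.
    kubectl --context addons-652898 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-652898 -n ingress-nginx wait --timeout=6m \
      --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx
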
	I1011 20:59:09.466343  876623 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1011 20:59:09.467306  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.468133  876623 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1011 20:59:09.468152  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1011 20:59:09.509102  876623 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
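
The default-storageclass warning above is a Kubernetes optimistic-concurrency conflict: the local-path StorageClass changed between minikube's read and its update, so the write was rejected with "the object has been modified". A hedged manual equivalent uses server-side patches of the well-known annotation, which avoid the read-modify-write race:

    # Mark 'local-path' non-default and 'standard' default (names from the log).
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
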
	I1011 20:59:09.567496  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1011 20:59:09.946072  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:09.946591  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:09.953638  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.115823995s)
	I1011 20:59:09.953690  876623 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-652898"
	I1011 20:59:09.953891  876623 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.591329836s)
	I1011 20:59:09.962792  876623 out.go:177] * Verifying csi-hostpath-driver addon...
	I1011 20:59:09.962866  876623 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1011 20:59:09.964895  876623 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1011 20:59:09.965375  876623 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1011 20:59:09.966959  876623 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1011 20:59:09.966986  876623 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1011 20:59:10.025697  876623 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1011 20:59:10.025735  876623 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1011 20:59:10.044674  876623 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1011 20:59:10.044711  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.107227  876623 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:10.107287  876623 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1011 20:59:10.189819  876623 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1011 20:59:10.435968  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.436589  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.480435  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:10.492894  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:10.935098  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:10.936417  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:10.970352  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.188641  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.621089615s)
	I1011 20:59:11.440270  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.448118  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.471454  876623 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.281588793s)
	I1011 20:59:11.474728  876623 addons.go:475] Verifying addon gcp-auth=true in "addons-652898"
	I1011 20:59:11.478491  876623 out.go:177] * Verifying gcp-auth addon...
	I1011 20:59:11.481197  876623 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1011 20:59:11.537082  876623 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 20:59:11.538763  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:11.936512  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:11.938220  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:11.971419  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.437186  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.438601  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.537454  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.935747  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:12.937259  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:12.971441  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:12.991226  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:13.433443  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.435119  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.470872  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:13.935326  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:13.935826  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:13.971036  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.435542  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.436307  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.469637  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:14.934226  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:14.935526  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:14.970035  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.433442  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.435077  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.472030  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:15.486836  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:15.934395  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:15.935679  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:15.970374  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.433274  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.435215  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.480072  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:16.936121  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:16.937503  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:16.970842  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.435883  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.437221  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.471352  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:17.489548  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:17.935289  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:17.936371  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:17.969916  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.433337  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.435841  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.471235  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:18.935359  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:18.935733  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:18.970158  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.434673  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.435067  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.471402  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:19.489894  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:19.933523  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:19.935408  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:19.970338  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.434553  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.435766  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.471008  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:20.964400  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:20.965962  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:20.970110  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.433686  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.434713  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:21.469950  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:21.492907  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:21.936653  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:21.937703  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.037775  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.434153  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.435212  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.470872  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:22.935665  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:22.935702  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:22.970996  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.438444  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.438618  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.476177  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:23.936446  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:23.937500  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:23.989282  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:24.037649  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.435477  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.436427  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.536795  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:24.934092  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:24.935297  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:24.969712  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.436464  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.437520  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:25.470181  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:25.934442  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:25.936563  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.036969  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.444334  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.448539  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.491820  876623 pod_ready.go:103] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"False"
	I1011 20:59:26.542589  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:26.977886  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:26.978209  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:26.993782  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.453865  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.454655  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.477633  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:27.962132  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:27.967519  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:27.988336  876623 pod_ready.go:93] pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:27.988414  876623 pod_ready.go:82] duration metric: took 27.007763512s for pod "coredns-7c65d6cfc9-6k5hf" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:27.988441  876623 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gtrpz" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:27.990664  876623 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-gtrpz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gtrpz" not found
	I1011 20:59:27.990740  876623 pod_ready.go:82] duration metric: took 2.274702ms for pod "coredns-7c65d6cfc9-gtrpz" in "kube-system" namespace to be "Ready" ...
	E1011 20:59:27.990775  876623 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-gtrpz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gtrpz" not found
	I1011 20:59:27.990799  876623 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.010012  876623 pod_ready.go:93] pod "etcd-addons-652898" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:28.010102  876623 pod_ready.go:82] duration metric: took 19.261133ms for pod "etcd-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.010138  876623 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.029620  876623 pod_ready.go:93] pod "kube-apiserver-addons-652898" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:28.029695  876623 pod_ready.go:82] duration metric: took 19.534936ms for pod "kube-apiserver-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.029724  876623 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.053224  876623 pod_ready.go:93] pod "kube-controller-manager-addons-652898" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:28.053308  876623 pod_ready.go:82] duration metric: took 23.560431ms for pod "kube-controller-manager-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.053339  876623 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g2cdn" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.067051  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.184802  876623 pod_ready.go:93] pod "kube-proxy-g2cdn" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:28.184875  876623 pod_ready.go:82] duration metric: took 131.500914ms for pod "kube-proxy-g2cdn" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.184903  876623 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.450472  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.452021  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.472766  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.587083  876623 pod_ready.go:93] pod "kube-scheduler-addons-652898" in "kube-system" namespace has status "Ready":"True"
	I1011 20:59:28.587178  876623 pod_ready.go:82] duration metric: took 402.250711ms for pod "kube-scheduler-addons-652898" in "kube-system" namespace to be "Ready" ...
	I1011 20:59:28.587212  876623 pod_ready.go:39] duration metric: took 27.627736573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 20:59:28.587286  876623 api_server.go:52] waiting for apiserver process to appear ...
	I1011 20:59:28.587436  876623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 20:59:28.621269  876623 api_server.go:72] duration metric: took 30.686854213s to wait for apiserver process to appear ...
	I1011 20:59:28.621373  876623 api_server.go:88] waiting for apiserver healthz status ...
	I1011 20:59:28.621425  876623 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1011 20:59:28.632259  876623 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1011 20:59:28.636573  876623 api_server.go:141] control plane version: v1.31.1
	I1011 20:59:28.636663  876623 api_server.go:131] duration metric: took 15.261598ms to wait for apiserver health ...
	I1011 20:59:28.636689  876623 system_pods.go:43] waiting for kube-system pods to appear ...
	I1011 20:59:28.796411  876623 system_pods.go:59] 18 kube-system pods found
	I1011 20:59:28.796531  876623 system_pods.go:61] "coredns-7c65d6cfc9-6k5hf" [d63dcaee-7299-4c92-8b8c-01f76da4a984] Running
	I1011 20:59:28.796567  876623 system_pods.go:61] "csi-hostpath-attacher-0" [0bba68a4-5159-4ed2-b190-97db2d48ad60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:28.796612  876623 system_pods.go:61] "csi-hostpath-resizer-0" [ba2713a9-3a15-4ae2-b9b2-4afbaee1b47c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:28.796657  876623 system_pods.go:61] "csi-hostpathplugin-pftvs" [7dfca49b-9f2c-43c3-a371-810514a311bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:28.796686  876623 system_pods.go:61] "etcd-addons-652898" [35179049-ba4c-458b-8dfa-ddae813ec2d1] Running
	I1011 20:59:28.796724  876623 system_pods.go:61] "kindnet-xdcs2" [a1449168-3977-4deb-a326-f9cdf9d3c465] Running
	I1011 20:59:28.796750  876623 system_pods.go:61] "kube-apiserver-addons-652898" [362be9b7-d6c5-450f-9a51-451fbb5724eb] Running
	I1011 20:59:28.796792  876623 system_pods.go:61] "kube-controller-manager-addons-652898" [3331f9ad-57d4-4435-89fd-83054053e18d] Running
	I1011 20:59:28.796824  876623 system_pods.go:61] "kube-ingress-dns-minikube" [8024ad8e-70f1-4188-b2af-abf59a60a1b6] Running
	I1011 20:59:28.796848  876623 system_pods.go:61] "kube-proxy-g2cdn" [52420807-cd36-4f46-93d4-e5256aa85489] Running
	I1011 20:59:28.796891  876623 system_pods.go:61] "kube-scheduler-addons-652898" [72478c1c-1ed3-4326-adc7-b3d36956741d] Running
	I1011 20:59:28.796921  876623 system_pods.go:61] "metrics-server-84c5f94fbc-qb7lm" [a5df8f56-ef8b-4756-bcd0-d8d56155142f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:28.796951  876623 system_pods.go:61] "nvidia-device-plugin-daemonset-rkj87" [99c05700-f3f9-40c0-a106-a77ad2e167e3] Running
	I1011 20:59:28.796986  876623 system_pods.go:61] "registry-66c9cd494c-vzmnb" [f153018e-71f1-4acd-b221-2ba610df9d84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:28.797013  876623 system_pods.go:61] "registry-proxy-mmdzc" [432e8bfd-ef31-4233-8cd7-b002f2475dae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:28.797055  876623 system_pods.go:61] "snapshot-controller-56fcc65765-9jl2b" [9d066dee-1dbe-4dd4-9e30-47f83aeb376f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:28.797092  876623 system_pods.go:61] "snapshot-controller-56fcc65765-hrclg" [6cd73381-98f1-417c-8076-c53b7c09e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:28.797117  876623 system_pods.go:61] "storage-provisioner" [085f39f0-549c-41c9-93f2-a276fb45bdc6] Running
	I1011 20:59:28.797154  876623 system_pods.go:74] duration metric: took 160.392753ms to wait for pod list to return data ...
	I1011 20:59:28.797189  876623 default_sa.go:34] waiting for default service account to be created ...
	I1011 20:59:28.945437  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:28.946826  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:28.972779  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:28.986404  876623 default_sa.go:45] found service account: "default"
	I1011 20:59:28.986434  876623 default_sa.go:55] duration metric: took 189.223471ms for default service account to be created ...
	I1011 20:59:28.986445  876623 system_pods.go:116] waiting for k8s-apps to be running ...
	I1011 20:59:29.193279  876623 system_pods.go:86] 18 kube-system pods found
	I1011 20:59:29.193331  876623 system_pods.go:89] "coredns-7c65d6cfc9-6k5hf" [d63dcaee-7299-4c92-8b8c-01f76da4a984] Running
	I1011 20:59:29.193386  876623 system_pods.go:89] "csi-hostpath-attacher-0" [0bba68a4-5159-4ed2-b190-97db2d48ad60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1011 20:59:29.193404  876623 system_pods.go:89] "csi-hostpath-resizer-0" [ba2713a9-3a15-4ae2-b9b2-4afbaee1b47c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1011 20:59:29.193414  876623 system_pods.go:89] "csi-hostpathplugin-pftvs" [7dfca49b-9f2c-43c3-a371-810514a311bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1011 20:59:29.193423  876623 system_pods.go:89] "etcd-addons-652898" [35179049-ba4c-458b-8dfa-ddae813ec2d1] Running
	I1011 20:59:29.193429  876623 system_pods.go:89] "kindnet-xdcs2" [a1449168-3977-4deb-a326-f9cdf9d3c465] Running
	I1011 20:59:29.193451  876623 system_pods.go:89] "kube-apiserver-addons-652898" [362be9b7-d6c5-450f-9a51-451fbb5724eb] Running
	I1011 20:59:29.193471  876623 system_pods.go:89] "kube-controller-manager-addons-652898" [3331f9ad-57d4-4435-89fd-83054053e18d] Running
	I1011 20:59:29.193487  876623 system_pods.go:89] "kube-ingress-dns-minikube" [8024ad8e-70f1-4188-b2af-abf59a60a1b6] Running
	I1011 20:59:29.193497  876623 system_pods.go:89] "kube-proxy-g2cdn" [52420807-cd36-4f46-93d4-e5256aa85489] Running
	I1011 20:59:29.193502  876623 system_pods.go:89] "kube-scheduler-addons-652898" [72478c1c-1ed3-4326-adc7-b3d36956741d] Running
	I1011 20:59:29.193508  876623 system_pods.go:89] "metrics-server-84c5f94fbc-qb7lm" [a5df8f56-ef8b-4756-bcd0-d8d56155142f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1011 20:59:29.193517  876623 system_pods.go:89] "nvidia-device-plugin-daemonset-rkj87" [99c05700-f3f9-40c0-a106-a77ad2e167e3] Running
	I1011 20:59:29.193524  876623 system_pods.go:89] "registry-66c9cd494c-vzmnb" [f153018e-71f1-4acd-b221-2ba610df9d84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1011 20:59:29.193530  876623 system_pods.go:89] "registry-proxy-mmdzc" [432e8bfd-ef31-4233-8cd7-b002f2475dae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1011 20:59:29.193538  876623 system_pods.go:89] "snapshot-controller-56fcc65765-9jl2b" [9d066dee-1dbe-4dd4-9e30-47f83aeb376f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:29.193549  876623 system_pods.go:89] "snapshot-controller-56fcc65765-hrclg" [6cd73381-98f1-417c-8076-c53b7c09e09b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1011 20:59:29.193564  876623 system_pods.go:89] "storage-provisioner" [085f39f0-549c-41c9-93f2-a276fb45bdc6] Running
	I1011 20:59:29.193578  876623 system_pods.go:126] duration metric: took 207.126597ms to wait for k8s-apps to be running ...
	I1011 20:59:29.193596  876623 system_svc.go:44] waiting for kubelet service to be running ....
	I1011 20:59:29.193678  876623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 20:59:29.206056  876623 system_svc.go:56] duration metric: took 12.458406ms WaitForService to wait for kubelet
	I1011 20:59:29.206081  876623 kubeadm.go:582] duration metric: took 31.271679416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 20:59:29.206101  876623 node_conditions.go:102] verifying NodePressure condition ...
	I1011 20:59:29.384763  876623 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1011 20:59:29.384798  876623 node_conditions.go:123] node cpu capacity is 2
	I1011 20:59:29.384813  876623 node_conditions.go:105] duration metric: took 178.706478ms to run NodePressure ...
	I1011 20:59:29.384830  876623 start.go:241] waiting for startup goroutines ...
	I1011 20:59:29.435001  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.435742  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.536039  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:29.936738  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:29.937703  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:29.970470  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.435479  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.436966  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:30.536250  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:30.943634  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:30.944858  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.042791  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.435510  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.436436  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.470498  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:31.938181  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:31.940889  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:31.970759  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.435242  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.436192  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.470414  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:32.940206  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:32.941317  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:32.971464  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.434943  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.435229  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.478573  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:33.944744  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:33.946540  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:33.971407  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.433954  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:34.434854  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.470516  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:34.939857  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:34.940475  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1011 20:59:35.039538  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.437402  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:35.438608  876623 kapi.go:107] duration metric: took 26.009403006s to wait for kubernetes.io/minikube-addons=registry ...
	I1011 20:59:35.471369  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:35.935605  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.037851  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.436532  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.470967  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:36.935040  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:36.970153  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.434829  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.470891  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:37.937084  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:37.971698  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.441588  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.471429  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:38.935776  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:38.970333  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.434859  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.471560  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:39.935047  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:39.971970  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.435322  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.470559  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:40.935713  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:40.971686  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.435109  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.471257  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:41.934710  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:41.975841  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.435405  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.536403  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:42.934142  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:42.970600  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.434075  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.471562  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:43.937289  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:43.970677  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.434608  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.470372  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:44.935465  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:44.971208  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.438811  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.477542  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:45.934496  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:45.970857  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.434706  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.470942  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:46.934809  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:46.970611  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.438005  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.537065  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:47.934343  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:47.969737  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.435393  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.471421  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:48.935822  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:48.970887  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.435288  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.470657  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:49.934625  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:49.970440  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.435279  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.470738  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:50.935730  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:50.970453  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.435829  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:51.470871  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:51.934309  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.034003  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.437433  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.470821  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:52.934802  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:52.970136  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.434773  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.470127  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:53.935427  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:53.971265  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.435414  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.469913  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:54.934846  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:54.971557  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.434616  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:55.476479  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:55.934895  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.036704  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.435479  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.470211  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:56.934782  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:56.970970  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.434254  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.470589  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:57.935051  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:57.971277  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.436278  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:58.470904  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:58.935862  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.037963  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.436220  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.472508  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 20:59:59.934702  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 20:59:59.971243  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.661352  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.663570  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:00.968267  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:00.999143  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.447437  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.472056  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:01.944279  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:01.970807  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1011 21:00:02.435718  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:02.470957  876623 kapi.go:107] duration metric: took 52.505575295s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1011 21:00:02.935387  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.434515  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:03.934407  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.435100  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:04.935584  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.434911  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:05.935258  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.435029  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:06.934949  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.434586  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:07.934674  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.442750  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:08.935025  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.435668  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:09.934990  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.435401  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:10.934787  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.434885  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:11.934999  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.434764  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:12.935475  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.435101  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:13.934174  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.434355  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:14.936010  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.436197  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:15.935253  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.435837  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:16.935714  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.435641  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:17.935604  876623 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1011 21:00:18.436915  876623 kapi.go:107] duration metric: took 1m9.00680434s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1011 21:00:33.492151  876623 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1011 21:00:33.492182  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:33.984875  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.485275  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:34.989517  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.485651  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:35.986033  876623 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1011 21:00:36.484755  876623 kapi.go:107] duration metric: took 1m25.003556121s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1011 21:00:36.486804  876623 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-652898 cluster.
	I1011 21:00:36.489064  876623 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1011 21:00:36.490750  876623 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1011 21:00:36.492678  876623 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, metrics-server, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1011 21:00:36.494188  876623 addons.go:510] duration metric: took 1m38.559396515s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns cloud-spanner amd-gpu-device-plugin inspektor-gadget metrics-server volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1011 21:00:36.494243  876623 start.go:246] waiting for cluster config update ...
	I1011 21:00:36.494311  876623 start.go:255] writing updated cluster config ...
	I1011 21:00:36.494605  876623 ssh_runner.go:195] Run: rm -f paused
	I1011 21:00:36.843214  876623 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1011 21:00:36.845523  876623 out.go:177] * Done! kubectl is now configured to use "addons-652898" cluster and "default" namespace by default
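
The long run of kapi.go "waiting for pod ... Pending" lines above is minikube polling each addon's label selector until its pods report Ready (each "duration metric: took ..." line closes one such wait). A rough manual equivalent against the same cluster is one kubectl wait per selector; this is an illustrative sketch only (the ingress-nginx namespace is an assumption, and minikube's internal poller is not implemented this way):

$ kubectl --context addons-652898 wait pod --for=condition=Ready \
    -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx --timeout=6m
$ kubectl --context addons-652898 wait pod --for=condition=Ready \
    -l kubernetes.io/minikube-addons=csi-hostpath-driver -n kube-system --timeout=6m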
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a2629be60dbdd       9c8d328e7d9e8       3 minutes ago       Running             gcp-auth                                 0                   00326f2ed5350       gcp-auth-c684cb797-9lgt6
	a959253edc131       2d37f5a3dd01b       3 minutes ago       Running             controller                               0                   b86af409db472       ingress-nginx-controller-5f85ff4588-nks6g
	33c72436e136f       ee6d597e62dc8       3 minutes ago       Running             csi-snapshotter                          0                   393722c363433       csi-hostpathplugin-pftvs
	230bd0d5f6fe0       642ded511e141       3 minutes ago       Running             csi-provisioner                          0                   393722c363433       csi-hostpathplugin-pftvs
	28c59393a21f8       922312104da8a       3 minutes ago       Running             liveness-probe                           0                   393722c363433       csi-hostpathplugin-pftvs
	228bcf4835cf4       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   393722c363433       csi-hostpathplugin-pftvs
	7d5096a239d1b       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   393722c363433       csi-hostpathplugin-pftvs
	848b0e2854958       1a9605c872c1d       4 minutes ago       Running             admission                                0                   63e4e9d41d757       volcano-admission-5874dfdd79-lx7gl
	967e85005967c       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   24cee3985e70d       volcano-scheduler-6c9778cbdf-w9lfw
	9e62f8690eb2c       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   1cb4714833fd4       csi-hostpath-attacher-0
	9c45e22b642eb       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   be5fbd7d524b8       csi-hostpath-resizer-0
	9f08b9772796c       d54655ed3a854       4 minutes ago       Exited              patch                                    0                   d5de737f581ca       ingress-nginx-admission-patch-2h9zm
	de8cd3e71a9fb       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   9e780b87e67f2       volcano-controllers-789ffc5785-6wjzc
	dc48bd477b16a       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   393722c363433       csi-hostpathplugin-pftvs
	269ec1040a2e0       d54655ed3a854       4 minutes ago       Exited              create                                   0                   35eb6178f64e1       ingress-nginx-admission-create-b8rtx
	20bf570bf94ac       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   1c90efbc72630       snapshot-controller-56fcc65765-hrclg
	ca3ce78f55166       77bdba588b953       4 minutes ago       Running             yakd                                     0                   ccb3660617968       yakd-dashboard-67d98fc6b-4cxxt
	b5bcf9a56e2f9       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   12179ed46b5da       snapshot-controller-56fcc65765-9jl2b
	206aa156309ce       c9cf76bb104e1       4 minutes ago       Running             registry                                 0                   bf6295a9e7dfc       registry-66c9cd494c-vzmnb
	2dd533cd210c6       68de1ddeaded8       4 minutes ago       Running             gadget                                   0                   7a5c4219707ec       gadget-84nrf
	e0d7a877d2b64       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   45efc8dcfb040       local-path-provisioner-86d989889c-bcc5v
	85a461cc456d4       434d64ac3dbf3       4 minutes ago       Running             registry-proxy                           0                   58c9f807e3cc0       registry-proxy-mmdzc
	7f5a1d46ed6f2       2f6c962e7b831       4 minutes ago       Running             coredns                                  0                   a95621f9570ec       coredns-7c65d6cfc9-6k5hf
	aa22b63872889       5548a49bb60ba       4 minutes ago       Running             metrics-server                           0                   2b3d1de75d4c4       metrics-server-84c5f94fbc-qb7lm
	4dcc57ad4acd5       be9cac3585579       4 minutes ago       Running             cloud-spanner-emulator                   0                   ac8b9e4f71aad       cloud-spanner-emulator-5b584cc74-fk7r7
	3270c2d6aaf50       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ea3de86903774       nvidia-device-plugin-daemonset-rkj87
	41048e3cc684d       35508c2f890c4       4 minutes ago       Running             minikube-ingress-dns                     0                   b6d32871859d9       kube-ingress-dns-minikube
	c894c0df3ac41       ba04bb24b9575       4 minutes ago       Running             storage-provisioner                      0                   eaabc30b73537       storage-provisioner
	34febf3a40913       0bcd66b03df5f       4 minutes ago       Running             kindnet-cni                              0                   a91d5cabbad64       kindnet-xdcs2
	b7c2a00ced020       24a140c548c07       4 minutes ago       Running             kube-proxy                               0                   988f0ade82a91       kube-proxy-g2cdn
	dbb7deaa41bac       d3f53a98c0a9d       5 minutes ago       Running             kube-apiserver                           0                   2912ae46e5f0e       kube-apiserver-addons-652898
	b6b8bf0fd9c36       279f381cb3736       5 minutes ago       Running             kube-controller-manager                  0                   c6ea0ba6f0f28       kube-controller-manager-addons-652898
	fbbbceab379f2       27e3830e14027       5 minutes ago       Running             etcd                                     0                   98fa31e168805       etcd-addons-652898
	4da45974574cd       7f8aa378bb47d       5 minutes ago       Running             kube-scheduler                           0                   9cbc8a9458369       kube-scheduler-addons-652898
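
The table above follows crictl's column layout (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID, POD). With the containerd runtime used by this job, a close equivalent can be regenerated on the node itself; a sketch, assuming the node is reachable via minikube ssh and ships crictl:

$ out/minikube-linux-arm64 -p addons-652898 ssh -- sudo crictl ps -a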
	
	
	==> containerd <==
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.659173245Z" level=info msg="TearDown network for sandbox \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\" successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.659211316Z" level=info msg="StopPodSandbox for \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\" returns successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.659780552Z" level=info msg="RemovePodSandbox for \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\""
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.659826386Z" level=info msg="Forcibly stopping sandbox \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\""
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.667674071Z" level=info msg="TearDown network for sandbox \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\" successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.675223329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.675363981Z" level=info msg="RemovePodSandbox \"fe647b281b2ec396510b028818afcf462fdbabbe710b8bac8c27bb9ab072418d\" returns successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.676224981Z" level=info msg="StopPodSandbox for \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\""
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.683832381Z" level=info msg="TearDown network for sandbox \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\" successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.683872397Z" level=info msg="StopPodSandbox for \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\" returns successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.684473419Z" level=info msg="RemovePodSandbox for \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\""
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.684526121Z" level=info msg="Forcibly stopping sandbox \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\""
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.691816377Z" level=info msg="TearDown network for sandbox \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\" successfully"
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.702914588Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 11 21:00:52 addons-652898 containerd[820]: time="2024-10-11T21:00:52.703058374Z" level=info msg="RemovePodSandbox \"c5728d3b897239845cedf303eb882cbda0be505b5fa57ff61247a6ef5aea706f\" returns successfully"
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.707271383Z" level=info msg="RemoveContainer for \"db7d3b142857e418d7b3664a36f6bcc6e45d95e4c833b5bd722f632e695b51ab\""
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.714161421Z" level=info msg="RemoveContainer for \"db7d3b142857e418d7b3664a36f6bcc6e45d95e4c833b5bd722f632e695b51ab\" returns successfully"
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.716181078Z" level=info msg="StopPodSandbox for \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\""
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.723837241Z" level=info msg="TearDown network for sandbox \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\" successfully"
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.724027943Z" level=info msg="StopPodSandbox for \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\" returns successfully"
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.724741941Z" level=info msg="RemovePodSandbox for \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\""
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.724864657Z" level=info msg="Forcibly stopping sandbox \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\""
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.732734004Z" level=info msg="TearDown network for sandbox \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\" successfully"
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.742837358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 11 21:01:52 addons-652898 containerd[820]: time="2024-10-11T21:01:52.742953509Z" level=info msg="RemovePodSandbox \"597dd547f9c08de91cd3466a93dd5ccbd49c5380f6c828574b3d37c0e6f8cc15\" returns successfully"
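
These entries carry a systemd-journal prefix (unit containerd, PID 820), so the same sandbox-teardown sequence should be retrievable straight from the node's journal; an illustrative command, assuming journalctl is available inside the node image:

$ out/minikube-linux-arm64 -p addons-652898 ssh -- sudo journalctl -u containerd --no-pager -n 50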
	
	
	==> coredns [7f5a1d46ed6f2ff6e99b2700e6b863f825b41d45c85d32703c0f6da07262dccb] <==
	[INFO] 10.244.0.5:60268 - 16826 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055606s
	[INFO] 10.244.0.5:60268 - 62928 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001415787s
	[INFO] 10.244.0.5:60268 - 32674 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001697278s
	[INFO] 10.244.0.5:60268 - 32706 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000056451s
	[INFO] 10.244.0.5:60268 - 20997 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000056722s
	[INFO] 10.244.0.5:42293 - 27994 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101833s
	[INFO] 10.244.0.5:42293 - 28207 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003813s
	[INFO] 10.244.0.5:54014 - 34468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000608s
	[INFO] 10.244.0.5:54014 - 34656 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035175s
	[INFO] 10.244.0.5:44064 - 32510 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038859s
	[INFO] 10.244.0.5:44064 - 32682 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036964s
	[INFO] 10.244.0.5:51184 - 10351 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001486695s
	[INFO] 10.244.0.5:51184 - 10564 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001368156s
	[INFO] 10.244.0.5:39261 - 48906 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067725s
	[INFO] 10.244.0.5:39261 - 49342 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037464s
	[INFO] 10.244.0.25:38417 - 15170 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194969s
	[INFO] 10.244.0.25:55011 - 24003 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182235s
	[INFO] 10.244.0.25:53484 - 40370 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174572s
	[INFO] 10.244.0.25:47829 - 65087 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000140356s
	[INFO] 10.244.0.25:46081 - 1281 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117349s
	[INFO] 10.244.0.25:52302 - 19883 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134662s
	[INFO] 10.244.0.25:45299 - 17388 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002136095s
	[INFO] 10.244.0.25:43341 - 60859 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002286396s
	[INFO] 10.244.0.25:33335 - 54664 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001108843s
	[INFO] 10.244.0.25:38936 - 10538 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002040046s
	
	
	==> describe nodes <==
	Name:               addons-652898
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-652898
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=addons-652898
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T20_58_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-652898
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-652898"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 20:58:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-652898
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:03:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:00:55 +0000   Fri, 11 Oct 2024 20:58:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:00:55 +0000   Fri, 11 Oct 2024 20:58:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:00:55 +0000   Fri, 11 Oct 2024 20:58:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:00:55 +0000   Fri, 11 Oct 2024 20:58:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-652898
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 953b5a4a7a9841868087e45e8f0411a3
	  System UUID:                b857a321-ae85-441a-9d2e-cecc4408459a
	  Boot ID:                    d161fc74-b16f-4a64-ba04-769b77a65402
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-fk7r7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  gadget                      gadget-84nrf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  gcp-auth                    gcp-auth-c684cb797-9lgt6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-nks6g    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m48s
	  kube-system                 coredns-7c65d6cfc9-6k5hf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-pftvs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 etcd-addons-652898                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m3s
	  kube-system                 kindnet-xdcs2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-652898                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-652898        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-g2cdn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-652898                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-84c5f94fbc-qb7lm              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-rkj87         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-66c9cd494c-vzmnb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-mmdzc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-56fcc65765-9jl2b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 snapshot-controller-56fcc65765-hrclg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-86d989889c-bcc5v      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  volcano-system              volcano-admission-5874dfdd79-lx7gl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  volcano-system              volcano-controllers-789ffc5785-6wjzc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  volcano-system              volcano-scheduler-6c9778cbdf-w9lfw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-4cxxt               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m55s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 5m10s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node addons-652898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x7 over 5m10s)  kubelet          Node addons-652898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node addons-652898 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Normal   Starting                 5m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m3s                   kubelet          Node addons-652898 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s                   kubelet          Node addons-652898 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s                   kubelet          Node addons-652898 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m58s                  node-controller  Node addons-652898 event: Registered Node addons-652898 in Controller
	
	
	==> dmesg <==
	[Oct11 18:36] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [fbbbceab379f27554ed59666bd5244b724886dabb4e9c5a1fdfcb2abf0abca31] <==
	{"level":"info","ts":"2024-10-11T20:58:46.615677Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-11T20:58:46.615773Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-11T20:58:46.610122Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-11T20:58:46.616418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-10-11T20:58:46.618404Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-10-11T20:58:47.571135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-11T20:58:47.571360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-11T20:58:47.571504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-11T20:58:47.571592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-11T20:58:47.571681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-11T20:58:47.571756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-11T20:58:47.571844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-11T20:58:47.574456Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-652898 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-11T20:58:47.574804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T20:58:47.575276Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T20:58:47.578302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-11T20:58:47.578513Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T20:58:47.590360Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T20:58:47.590463Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T20:58:47.590494Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-11T20:58:47.591195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-11T20:58:47.598917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-11T20:58:47.599786Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-11T20:58:47.599852Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-11T20:58:47.599869Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [a2629be60dbdd942e265474a0e0ea4fe7c52d0dd941a6c213abba213bc1cfdc8] <==
	2024/10/11 21:00:36 GCP Auth Webhook started!
	2024/10/11 21:00:53 Ready to marshal response ...
	2024/10/11 21:00:53 Ready to write response ...
	2024/10/11 21:00:54 Ready to marshal response ...
	2024/10/11 21:00:54 Ready to write response ...
	
	
	==> kernel <==
	 21:03:55 up  4:46,  0 users,  load average: 0.20, 0.92, 0.77
	Linux addons-652898 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [34febf3a40913950c7de4fb8bf1fde40f0cb200d465b34a19e22cc904f13fc79] <==
	I1011 21:01:51.910413       1 main.go:300] handling current node
	I1011 21:02:01.903424       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:01.903461       1 main.go:300] handling current node
	I1011 21:02:11.909142       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:11.909229       1 main.go:300] handling current node
	I1011 21:02:21.903335       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:21.903437       1 main.go:300] handling current node
	I1011 21:02:31.903133       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:31.903235       1 main.go:300] handling current node
	I1011 21:02:41.906715       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:41.906755       1 main.go:300] handling current node
	I1011 21:02:51.909539       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:02:51.909580       1 main.go:300] handling current node
	I1011 21:03:01.903647       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:01.903683       1 main.go:300] handling current node
	I1011 21:03:11.905041       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:11.905083       1 main.go:300] handling current node
	I1011 21:03:21.911466       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:21.911498       1 main.go:300] handling current node
	I1011 21:03:31.912480       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:31.912513       1 main.go:300] handling current node
	I1011 21:03:41.909171       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:41.909207       1 main.go:300] handling current node
	I1011 21:03:51.911954       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1011 21:03:51.911986       1 main.go:300] handling current node
	
	
	==> kube-apiserver [dbb7deaa41bac2b30966b94ab7248e6a4d346ba854c9b24d6faa962acdb2f171] <==
	E1011 20:59:34.881574       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W1011 20:59:42.454760       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 20:59:42.454801       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	W1011 20:59:42.456905       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:42.502580       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 20:59:42.502619       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	W1011 20:59:42.504230       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:51.664854       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:52.467432       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 20:59:52.467478       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	W1011 20:59:52.469609       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:52.704149       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:53.729147       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:54.826646       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:55.925581       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:56.959306       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 20:59:58.042419       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.136.43:443: connect: connection refused
	W1011 21:00:14.467249       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 21:00:14.467287       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	W1011 21:00:14.514950       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 21:00:14.514989       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	W1011 21:00:33.435207       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.173.22:443: connect: connection refused
	E1011 21:00:33.435248       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.173.22:443: connect: connection refused" logger="UnhandledError"
	I1011 21:00:53.374756       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1011 21:00:53.427343       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [b6b8bf0fd9c365d864f9f39481573f41038c28925ca01527eaa4edd37528b16f] <==
	I1011 21:00:17.473561       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1011 21:00:18.212838       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1011 21:00:18.248100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="81.279µs"
	I1011 21:00:18.337401       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1011 21:00:18.480335       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1011 21:00:18.495887       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1011 21:00:18.502432       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1011 21:00:19.219816       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1011 21:00:19.232924       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1011 21:00:19.239523       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1011 21:00:24.733123       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-652898"
	I1011 21:00:31.540916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="14.832458ms"
	I1011 21:00:31.541239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="280.45µs"
	I1011 21:00:33.463925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="31.088022ms"
	I1011 21:00:33.476715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="12.749032ms"
	I1011 21:00:33.476801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="47.532µs"
	I1011 21:00:33.495888       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="56.319µs"
	I1011 21:00:36.297893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="9.407726ms"
	I1011 21:00:36.298613       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="31.13µs"
	I1011 21:00:48.034131       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1011 21:00:48.076902       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1011 21:00:49.014569       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1011 21:00:49.044429       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1011 21:00:53.108805       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I1011 21:00:55.335975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-652898"
	
	
	==> kube-proxy [b7c2a00ced02099f3934181720edd66765719f870f838a4edcf4558b5adffe69] <==
	I1011 20:58:59.557156       1 server_linux.go:66] "Using iptables proxy"
	I1011 20:58:59.663024       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1011 20:58:59.663144       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1011 20:58:59.713117       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1011 20:58:59.713174       1 server_linux.go:169] "Using iptables Proxier"
	I1011 20:58:59.715115       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1011 20:58:59.715582       1 server.go:483] "Version info" version="v1.31.1"
	I1011 20:58:59.715602       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1011 20:58:59.718437       1 config.go:199] "Starting service config controller"
	I1011 20:58:59.718481       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1011 20:58:59.718516       1 config.go:105] "Starting endpoint slice config controller"
	I1011 20:58:59.718521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1011 20:58:59.721186       1 config.go:328] "Starting node config controller"
	I1011 20:58:59.721205       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1011 20:58:59.819487       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1011 20:58:59.819543       1 shared_informer.go:320] Caches are synced for service config
	I1011 20:58:59.821996       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4da45974574cd106b2393bda35af452ad81d395dd59d2ed0f8fde0b875b73807] <==
	W1011 20:58:50.298883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 20:58:50.298900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:50.298957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 20:58:50.298973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:50.299173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1011 20:58:50.299203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:50.299279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:58:50.299300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:50.299363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:50.299383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:50.306888       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 20:58:50.306937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.100051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 20:58:51.100093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.276866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 20:58:51.276965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.302478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 20:58:51.302707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.317457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:51.317672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.344760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 20:58:51.344881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1011 20:58:51.535381       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 20:58:51.535430       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1011 20:58:53.777252       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 11 21:00:32 addons-652898 kubelet[1486]: I1011 21:00:32.590386    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-fk7r7" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: E1011 21:00:33.457623    1486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4fe6be5-9aac-4bd0-b155-1a859f9d58f2" containerName="patch"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: E1011 21:00:33.457664    1486 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="800f5a0e-98b6-4167-a742-d84b4177a7ea" containerName="create"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.457727    1486 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4fe6be5-9aac-4bd0-b155-1a859f9d58f2" containerName="patch"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.457738    1486 memory_manager.go:354] "RemoveStaleState removing state" podUID="800f5a0e-98b6-4167-a742-d84b4177a7ea" containerName="create"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.613894    1486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f0e571fe-72f9-4db0-9977-5f2fe52d7c88-webhook-certs\") pod \"gcp-auth-c684cb797-9lgt6\" (UID: \"f0e571fe-72f9-4db0-9977-5f2fe52d7c88\") " pod="gcp-auth/gcp-auth-c684cb797-9lgt6"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.613954    1486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf5l5\" (UniqueName: \"kubernetes.io/projected/f0e571fe-72f9-4db0-9977-5f2fe52d7c88-kube-api-access-wf5l5\") pod \"gcp-auth-c684cb797-9lgt6\" (UID: \"f0e571fe-72f9-4db0-9977-5f2fe52d7c88\") " pod="gcp-auth/gcp-auth-c684cb797-9lgt6"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.613997    1486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0e571fe-72f9-4db0-9977-5f2fe52d7c88-gcp-creds\") pod \"gcp-auth-c684cb797-9lgt6\" (UID: \"f0e571fe-72f9-4db0-9977-5f2fe52d7c88\") " pod="gcp-auth/gcp-auth-c684cb797-9lgt6"
	Oct 11 21:00:33 addons-652898 kubelet[1486]: I1011 21:00:33.614018    1486 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-project\" (UniqueName: \"kubernetes.io/host-path/f0e571fe-72f9-4db0-9977-5f2fe52d7c88-gcp-project\") pod \"gcp-auth-c684cb797-9lgt6\" (UID: \"f0e571fe-72f9-4db0-9977-5f2fe52d7c88\") " pod="gcp-auth/gcp-auth-c684cb797-9lgt6"
	Oct 11 21:00:38 addons-652898 kubelet[1486]: I1011 21:00:38.589840    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mmdzc" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:00:48 addons-652898 kubelet[1486]: I1011 21:00:48.054814    1486 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-c684cb797-9lgt6" podStartSLOduration=12.985435936 podStartE2EDuration="15.05479552s" podCreationTimestamp="2024-10-11 21:00:33 +0000 UTC" firstStartedPulling="2024-10-11 21:00:33.86837354 +0000 UTC m=+101.386712592" lastFinishedPulling="2024-10-11 21:00:35.937733125 +0000 UTC m=+103.456072176" observedRunningTime="2024-10-11 21:00:36.286626444 +0000 UTC m=+103.804965528" watchObservedRunningTime="2024-10-11 21:00:48.05479552 +0000 UTC m=+115.573134572"
	Oct 11 21:00:48 addons-652898 kubelet[1486]: I1011 21:00:48.593524    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800f5a0e-98b6-4167-a742-d84b4177a7ea" path="/var/lib/kubelet/pods/800f5a0e-98b6-4167-a742-d84b4177a7ea/volumes"
	Oct 11 21:00:50 addons-652898 kubelet[1486]: I1011 21:00:50.594478    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4fe6be5-9aac-4bd0-b155-1a859f9d58f2" path="/var/lib/kubelet/pods/a4fe6be5-9aac-4bd0-b155-1a859f9d58f2/volumes"
	Oct 11 21:00:51 addons-652898 kubelet[1486]: I1011 21:00:51.590202    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rkj87" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:00:52 addons-652898 kubelet[1486]: I1011 21:00:52.632445    1486 scope.go:117] "RemoveContainer" containerID="fef52639deba6132ef1b6ca0d750f123ec6481f85d65e1319c1d8ae0582f025d"
	Oct 11 21:00:52 addons-652898 kubelet[1486]: I1011 21:00:52.641024    1486 scope.go:117] "RemoveContainer" containerID="2e6bde910ca24eac596d964265fd8818d63b9f0fa16777359a2961112edeb79a"
	Oct 11 21:00:54 addons-652898 kubelet[1486]: I1011 21:00:54.590216    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-vzmnb" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:00:54 addons-652898 kubelet[1486]: I1011 21:00:54.593869    1486 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a42f39-376a-46a8-9af4-400a8f463168" path="/var/lib/kubelet/pods/f1a42f39-376a-46a8-9af4-400a8f463168/volumes"
	Oct 11 21:01:52 addons-652898 kubelet[1486]: I1011 21:01:52.705606    1486 scope.go:117] "RemoveContainer" containerID="db7d3b142857e418d7b3664a36f6bcc6e45d95e4c833b5bd722f632e695b51ab"
	Oct 11 21:01:58 addons-652898 kubelet[1486]: I1011 21:01:58.592106    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rkj87" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:02:06 addons-652898 kubelet[1486]: I1011 21:02:06.589339    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mmdzc" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:02:07 addons-652898 kubelet[1486]: I1011 21:02:07.589661    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-vzmnb" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:03:03 addons-652898 kubelet[1486]: I1011 21:03:03.589518    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rkj87" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:03:32 addons-652898 kubelet[1486]: I1011 21:03:32.591800    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-vzmnb" secret="" err="secret \"gcp-auth\" not found"
	Oct 11 21:03:35 addons-652898 kubelet[1486]: I1011 21:03:35.590215    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mmdzc" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [c894c0df3ac418bca20f9cba858826d1833a6854a4abeca5d18cd37d080d0fe3] <==
	I1011 20:59:03.587955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 20:59:03.616997       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 20:59:03.617309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 20:59:03.700812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 20:59:03.701019       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-652898_e8dd1d8a-6d2d-48be-98ce-e33675e2cdcc!
	I1011 20:59:03.702231       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82cbcaab-bc13-4795-ac51-34dee684107d", APIVersion:"v1", ResourceVersion:"535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-652898_e8dd1d8a-6d2d-48be-98ce-e33675e2cdcc became leader
	I1011 20:59:03.802412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-652898_e8dd1d8a-6d2d-48be-98ce-e33675e2cdcc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-652898 -n addons-652898
helpers_test.go:261: (dbg) Run:  kubectl --context addons-652898 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-b8rtx ingress-nginx-admission-patch-2h9zm test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-652898 describe pod ingress-nginx-admission-create-b8rtx ingress-nginx-admission-patch-2h9zm test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-652898 describe pod ingress-nginx-admission-create-b8rtx ingress-nginx-admission-patch-2h9zm test-job-nginx-0: exit status 1 (91.782954ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-b8rtx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2h9zm" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-652898 describe pod ingress-nginx-admission-create-b8rtx ingress-nginx-admission-patch-2h9zm test-job-nginx-0: exit status 1
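Note: the three pods listed as non-running above were evidently deleted between the pod listing and the describe call, so the NotFound errors here suggest a cleanup race in the post-mortem step rather than an additional failure. One way to avoid the gap is to capture full object state in the same call that selects the pods; a minimal sketch (the -o yaml form is illustrative, the harness runs separate list and describe steps):

	kubectl --context addons-652898 get po -A --field-selector=status.phase!=Running -o yaml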
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable volcano --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable volcano --alsologtostderr -v=1: (11.129536271s)
--- FAIL: TestAddons/serial/Volcano (211.13s)
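Note: the describe-nodes output above shows the node advertises only 2 CPUs, of which 1050m (52%) are already requested by system and addon pods, which is consistent with test-job-nginx-0 staying in the non-running set above. A quick check of the remaining headroom, and one possible mitigation, sketched below; the --cpus value is illustrative, and recreating the profile discards its state:

	kubectl --context addons-652898 describe node addons-652898 | grep -A 8 "Allocated resources"
	out/minikube-linux-arm64 delete -p addons-652898
	out/minikube-linux-arm64 start -p addons-652898 --cpus=4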

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (386s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-310298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1011 21:48:22.563175  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-310298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m21.49549188s)

                                                
                                                
-- stdout --
	* [old-k8s-version-310298] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-310298" primary control-plane node in "old-k8s-version-310298" cluster
	* Pulling base image v0.0.45-1728382586-19774 ...
	* Restarting existing docker container for "old-k8s-version-310298" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-310298 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:47:38.057282 1085624 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:47:38.057523 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:47:38.057552 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:47:38.057578 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:47:38.057877 1085624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:47:38.058509 1085624 out.go:352] Setting JSON to false
	I1011 21:47:38.059673 1085624 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19805,"bootTime":1728663453,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 21:47:38.059787 1085624 start.go:139] virtualization:  
	I1011 21:47:38.063374 1085624 out.go:177] * [old-k8s-version-310298] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:47:38.065334 1085624 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:47:38.065394 1085624 notify.go:220] Checking for updates...
	I1011 21:47:38.070196 1085624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:47:38.072100 1085624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:47:38.073858 1085624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 21:47:38.075763 1085624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:47:38.077498 1085624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:47:38.080199 1085624 config.go:182] Loaded profile config "old-k8s-version-310298": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1011 21:47:38.083522 1085624 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1011 21:47:38.085583 1085624 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:47:38.114129 1085624 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:47:38.114429 1085624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:47:38.224764 1085624 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-11 21:47:38.206395353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:47:38.224955 1085624 docker.go:318] overlay module found
	I1011 21:47:38.227200 1085624 out.go:177] * Using the docker driver based on existing profile
	I1011 21:47:38.228788 1085624 start.go:297] selected driver: docker
	I1011 21:47:38.228806 1085624 start.go:901] validating driver "docker" against &{Name:old-k8s-version-310298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-310298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:47:38.228926 1085624 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:47:38.229566 1085624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:47:38.296467 1085624 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-11 21:47:38.286822187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:47:38.296837 1085624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:47:38.296858 1085624 cni.go:84] Creating CNI manager for ""
	I1011 21:47:38.296923 1085624 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 21:47:38.296961 1085624 start.go:340] cluster config:
	{Name:old-k8s-version-310298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-310298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:47:38.300195 1085624 out.go:177] * Starting "old-k8s-version-310298" primary control-plane node in "old-k8s-version-310298" cluster
	I1011 21:47:38.301891 1085624 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1011 21:47:38.303718 1085624 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1011 21:47:38.305376 1085624 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1011 21:47:38.305429 1085624 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1011 21:47:38.305443 1085624 cache.go:56] Caching tarball of preloaded images
	I1011 21:47:38.305530 1085624 preload.go:172] Found /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 21:47:38.305545 1085624 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1011 21:47:38.305659 1085624 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/config.json ...
	I1011 21:47:38.305889 1085624 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 21:47:38.331034 1085624 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1011 21:47:38.331062 1085624 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1011 21:47:38.331077 1085624 cache.go:194] Successfully downloaded all kic artifacts
	I1011 21:47:38.331107 1085624 start.go:360] acquireMachinesLock for old-k8s-version-310298: {Name:mka0dc43f60ac2471c420fb70c744dfe883da000 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:47:38.331161 1085624 start.go:364] duration metric: took 32.369µs to acquireMachinesLock for "old-k8s-version-310298"
	I1011 21:47:38.331186 1085624 start.go:96] Skipping create...Using existing machine configuration
	I1011 21:47:38.331195 1085624 fix.go:54] fixHost starting: 
	I1011 21:47:38.331451 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:38.348884 1085624 fix.go:112] recreateIfNeeded on old-k8s-version-310298: state=Stopped err=<nil>
	W1011 21:47:38.348917 1085624 fix.go:138] unexpected machine state, will restart: <nil>
	I1011 21:47:38.351206 1085624 out.go:177] * Restarting existing docker container for "old-k8s-version-310298" ...
	I1011 21:47:38.352860 1085624 cli_runner.go:164] Run: docker start old-k8s-version-310298
	I1011 21:47:38.726357 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:38.750409 1085624 kic.go:430] container "old-k8s-version-310298" state is running.
	I1011 21:47:38.750792 1085624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-310298
	I1011 21:47:38.776236 1085624 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/config.json ...
	I1011 21:47:38.776463 1085624 machine.go:93] provisionDockerMachine start ...
	I1011 21:47:38.776534 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:38.800022 1085624 main.go:141] libmachine: Using SSH client type: native
	I1011 21:47:38.800298 1085624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34170 <nil> <nil>}
	I1011 21:47:38.800308 1085624 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:47:38.803011 1085624 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39462->127.0.0.1:34170: read: connection reset by peer
	I1011 21:47:41.950300 1085624 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-310298
	
	I1011 21:47:41.950373 1085624 ubuntu.go:169] provisioning hostname "old-k8s-version-310298"
	I1011 21:47:41.950484 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:41.975163 1085624 main.go:141] libmachine: Using SSH client type: native
	I1011 21:47:41.975396 1085624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34170 <nil> <nil>}
	I1011 21:47:41.975407 1085624 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-310298 && echo "old-k8s-version-310298" | sudo tee /etc/hostname
	I1011 21:47:42.194146 1085624 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-310298
	
	I1011 21:47:42.194408 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:42.221475 1085624 main.go:141] libmachine: Using SSH client type: native
	I1011 21:47:42.221762 1085624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34170 <nil> <nil>}
	I1011 21:47:42.221793 1085624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-310298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-310298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-310298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:47:42.383794 1085624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1011 21:47:42.383836 1085624 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19749-870468/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-870468/.minikube}
	I1011 21:47:42.383861 1085624 ubuntu.go:177] setting up certificates
	I1011 21:47:42.383876 1085624 provision.go:84] configureAuth start
	I1011 21:47:42.383960 1085624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-310298
	I1011 21:47:42.408397 1085624 provision.go:143] copyHostCerts
	I1011 21:47:42.408470 1085624 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem, removing ...
	I1011 21:47:42.408484 1085624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem
	I1011 21:47:42.408571 1085624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem (1078 bytes)
	I1011 21:47:42.408703 1085624 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem, removing ...
	I1011 21:47:42.408717 1085624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem
	I1011 21:47:42.408751 1085624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem (1123 bytes)
	I1011 21:47:42.408818 1085624 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem, removing ...
	I1011 21:47:42.408829 1085624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem
	I1011 21:47:42.408855 1085624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem (1675 bytes)
	I1011 21:47:42.408909 1085624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-310298 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-310298]
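[Note: the server certificate generated above carries the SAN list shown in the log line. Assuming the paths from this run, once the cert lands in /etc/docker/server.pem (per the scp step below) its SANs can be confirmed on the node with:]

    # inspect the SANs baked into the machine's server certificate
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'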
	I1011 21:47:43.384686 1085624 provision.go:177] copyRemoteCerts
	I1011 21:47:43.384753 1085624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:47:43.384805 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:43.408461 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:43.527748 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 21:47:43.562806 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1011 21:47:43.589120 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:47:43.629401 1085624 provision.go:87] duration metric: took 1.245504559s to configureAuth
	I1011 21:47:43.629439 1085624 ubuntu.go:193] setting minikube options for container-runtime
	I1011 21:47:43.629668 1085624 config.go:182] Loaded profile config "old-k8s-version-310298": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1011 21:47:43.629682 1085624 machine.go:96] duration metric: took 4.853210869s to provisionDockerMachine
	I1011 21:47:43.629691 1085624 start.go:293] postStartSetup for "old-k8s-version-310298" (driver="docker")
	I1011 21:47:43.629704 1085624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:47:43.629784 1085624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:47:43.629846 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:43.655985 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:43.764094 1085624 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:47:43.767923 1085624 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 21:47:43.767956 1085624 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 21:47:43.767967 1085624 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 21:47:43.767974 1085624 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1011 21:47:43.767984 1085624 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/addons for local assets ...
	I1011 21:47:43.768037 1085624 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/files for local assets ...
	I1011 21:47:43.768119 1085624 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem -> 8758612.pem in /etc/ssl/certs
	I1011 21:47:43.768219 1085624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:47:43.777781 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem --> /etc/ssl/certs/8758612.pem (1708 bytes)
	I1011 21:47:43.821623 1085624 start.go:296] duration metric: took 191.914088ms for postStartSetup
	I1011 21:47:43.821746 1085624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:47:43.821825 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:43.856034 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:43.955172 1085624 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 21:47:43.963565 1085624 fix.go:56] duration metric: took 5.632361721s for fixHost
	I1011 21:47:43.963638 1085624 start.go:83] releasing machines lock for "old-k8s-version-310298", held for 5.632463842s
	I1011 21:47:43.963744 1085624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-310298
	I1011 21:47:44.016298 1085624 ssh_runner.go:195] Run: cat /version.json
	I1011 21:47:44.016353 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:44.016606 1085624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:47:44.016667 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:44.055942 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:44.070235 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:44.354504 1085624 ssh_runner.go:195] Run: systemctl --version
	I1011 21:47:44.362967 1085624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 21:47:44.367597 1085624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1011 21:47:44.394427 1085624 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1011 21:47:44.394519 1085624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:47:44.403232 1085624 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
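[Note: the loopback patch above injects a "name" field and pins cniVersion to 1.0.0. A minimal sketch of what /etc/cni/net.d/*loopback.conf* looks like after the sed, assuming a stock loopback config; field order and extra keys on the node may differ:]

    cat <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }
    EOF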
	I1011 21:47:44.403260 1085624 start.go:495] detecting cgroup driver to use...
	I1011 21:47:44.403297 1085624 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1011 21:47:44.403349 1085624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1011 21:47:44.417621 1085624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 21:47:44.430379 1085624 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:47:44.430443 1085624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:47:44.444535 1085624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:47:44.456836 1085624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:47:44.588558 1085624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:47:44.734549 1085624 docker.go:233] disabling docker service ...
	I1011 21:47:44.734625 1085624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:47:44.754069 1085624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:47:44.772276 1085624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:47:44.946041 1085624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:47:45.110625 1085624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:47:45.136321 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:47:45.164635 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1011 21:47:45.178525 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 21:47:45.202841 1085624 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 21:47:45.202980 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 21:47:45.221286 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 21:47:45.240343 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 21:47:45.256694 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 21:47:45.283362 1085624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:47:45.304878 1085624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
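[Note: the steps since 21:47:45.136 configure the runtime client and daemon: /etc/crictl.yaml points crictl at containerd's CRI socket, and the sed runs pin the sandbox image, disable the systemd cgroup driver, force the runc v2 shim, and point the CNI conf_dir at /etc/cni/net.d. A spot check, assuming the paths in this log:]

    sudo cat /etc/crictl.yaml
    # expected: runtime-endpoint: unix:///run/containerd/containerd.sock
    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected values per the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.2"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"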
	I1011 21:47:45.330862 1085624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:47:45.343410 1085624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:47:45.360271 1085624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:47:45.521612 1085624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1011 21:47:45.820128 1085624 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1011 21:47:45.820214 1085624 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1011 21:47:45.824474 1085624 start.go:563] Will wait 60s for crictl version
	I1011 21:47:45.824539 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:47:45.830682 1085624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:47:45.929292 1085624 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
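[Note: both 60s waits above can be reproduced by hand; a sketch assuming the default socket path:]

    # block until containerd's CRI socket appears, then query it
    timeout 60 bash -c 'until [ -S /run/containerd/containerd.sock ]; do sleep 1; done'
    sudo crictl version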
	I1011 21:47:45.929378 1085624 ssh_runner.go:195] Run: containerd --version
	I1011 21:47:45.976275 1085624 ssh_runner.go:195] Run: containerd --version
	I1011 21:47:46.016794 1085624 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1011 21:47:46.019155 1085624 cli_runner.go:164] Run: docker network inspect old-k8s-version-310298 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 21:47:46.048029 1085624 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1011 21:47:46.054678 1085624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
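[Note: the hosts patch above is a replace-then-append idiom: strip any stale entry, append the fresh one, and copy the result back in one shot. A generic sketch with placeholder NAME/IP; the same pattern recurs for control-plane.minikube.internal below:]

    NAME=host.minikube.internal; IP=192.168.76.1   # placeholders
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$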
	I1011 21:47:46.069669 1085624 kubeadm.go:883] updating cluster {Name:old-k8s-version-310298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-310298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:47:46.069790 1085624 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1011 21:47:46.069852 1085624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:47:46.170538 1085624 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 21:47:46.170559 1085624 containerd.go:534] Images already preloaded, skipping extraction
	I1011 21:47:46.170617 1085624 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:47:46.237118 1085624 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 21:47:46.237139 1085624 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:47:46.237146 1085624 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1011 21:47:46.237276 1085624 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-310298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-310298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
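[Note: once the unit file and its 10-kubeadm.conf drop-in are written by the scp steps below, the merged kubelet definition can be checked on the node with:]

    sudo systemctl cat kubelet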
	I1011 21:47:46.237346 1085624 ssh_runner.go:195] Run: sudo crictl info
	I1011 21:47:46.312829 1085624 cni.go:84] Creating CNI manager for ""
	I1011 21:47:46.312903 1085624 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 21:47:46.312928 1085624 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:47:46.312983 1085624 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-310298 NodeName:old-k8s-version-310298 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1011 21:47:46.313175 1085624 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-310298"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:47:46.313287 1085624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1011 21:47:46.326977 1085624 binaries.go:44] Found k8s binaries, skipping transfer
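[Note: for comparison with the generated kubeadm config above, the staged v1.20 binary found here can print its own defaults; a sketch using the path from this log:]

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config print init-defaults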
	I1011 21:47:46.327099 1085624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 21:47:46.343908 1085624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1011 21:47:46.372857 1085624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:47:46.398989 1085624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1011 21:47:46.428730 1085624 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1011 21:47:46.434765 1085624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:47:46.451618 1085624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:47:46.570523 1085624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:47:46.608009 1085624 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298 for IP: 192.168.76.2
	I1011 21:47:46.608032 1085624 certs.go:194] generating shared ca certs ...
	I1011 21:47:46.608048 1085624 certs.go:226] acquiring lock for ca certs: {Name:mk314562fa38b26f30da8f33a861c5cef3708653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:47:46.608191 1085624 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key
	I1011 21:47:46.608238 1085624 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key
	I1011 21:47:46.608261 1085624 certs.go:256] generating profile certs ...
	I1011 21:47:46.608348 1085624 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.key
	I1011 21:47:46.608426 1085624 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/apiserver.key.082c2fe0
	I1011 21:47:46.608476 1085624 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/proxy-client.key
	I1011 21:47:46.608586 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861.pem (1338 bytes)
	W1011 21:47:46.608619 1085624 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861_empty.pem, impossibly tiny 0 bytes
	I1011 21:47:46.608629 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:47:46.608655 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem (1078 bytes)
	I1011 21:47:46.608682 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:47:46.608710 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem (1675 bytes)
	I1011 21:47:46.608758 1085624 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem (1708 bytes)
	I1011 21:47:46.609395 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:47:46.654275 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:47:46.717670 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:47:46.774669 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:47:46.829665 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1011 21:47:46.891630 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1011 21:47:46.927216 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:47:46.961146 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1011 21:47:47.001203 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:47:47.029996 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861.pem --> /usr/share/ca-certificates/875861.pem (1338 bytes)
	I1011 21:47:47.056806 1085624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem --> /usr/share/ca-certificates/8758612.pem (1708 bytes)
	I1011 21:47:47.083644 1085624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:47:47.102868 1085624 ssh_runner.go:195] Run: openssl version
	I1011 21:47:47.108955 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:47:47.119196 1085624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:47:47.123470 1085624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:47:47.123553 1085624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:47:47.131013 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1011 21:47:47.140701 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/875861.pem && ln -fs /usr/share/ca-certificates/875861.pem /etc/ssl/certs/875861.pem"
	I1011 21:47:47.150918 1085624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/875861.pem
	I1011 21:47:47.155096 1085624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:08 /usr/share/ca-certificates/875861.pem
	I1011 21:47:47.155169 1085624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/875861.pem
	I1011 21:47:47.162442 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/875861.pem /etc/ssl/certs/51391683.0"
	I1011 21:47:47.172352 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8758612.pem && ln -fs /usr/share/ca-certificates/8758612.pem /etc/ssl/certs/8758612.pem"
	I1011 21:47:47.182816 1085624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8758612.pem
	I1011 21:47:47.186843 1085624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:08 /usr/share/ca-certificates/8758612.pem
	I1011 21:47:47.186917 1085624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8758612.pem
	I1011 21:47:47.194019 1085624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8758612.pem /etc/ssl/certs/3ec20f2e.0"
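[Note: the *.0 names above are OpenSSL subject hashes; each cert is symlinked under /etc/ssl/certs as <hash>.0 so verifiers can find it by subject. The pattern, spelled out for the CA handled above:]

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"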
	I1011 21:47:47.204215 1085624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:47:47.208176 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1011 21:47:47.215622 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1011 21:47:47.223000 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1011 21:47:47.230250 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1011 21:47:47.237457 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1011 21:47:47.244544 1085624 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
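[Note: each -checkend 86400 above asserts the certificate is still valid for at least 24 hours; openssl exits non-zero if it would expire within that window. For example:]

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
      echo 'front-proxy-client.crt valid for >24h'
    fi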
	I1011 21:47:47.251773 1085624 kubeadm.go:392] StartCluster: {Name:old-k8s-version-310298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-310298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:47:47.251869 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1011 21:47:47.251941 1085624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:47:47.301693 1085624 cri.go:89] found id: "eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:47:47.301725 1085624 cri.go:89] found id: "8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:47:47.301730 1085624 cri.go:89] found id: "8348cfdc34eeae1244b17b3e5da2bb06ddd34b07c7da3d8f6cf500e0d40a385b"
	I1011 21:47:47.301734 1085624 cri.go:89] found id: "032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:47:47.301737 1085624 cri.go:89] found id: "5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:47:47.301741 1085624 cri.go:89] found id: "ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:47:47.301744 1085624 cri.go:89] found id: "5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:47:47.301748 1085624 cri.go:89] found id: "82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:47:47.301751 1085624 cri.go:89] found id: ""
	I1011 21:47:47.301802 1085624 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1011 21:47:47.315570 1085624 cri.go:116] JSON = null
	W1011 21:47:47.315620 1085624 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
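[Note: the warning above comes from cross-checking two views of the same kube-system containers: crictl reports 8, while runc's listing under the k8s.io root returns null (0 paused containers). The comparison can be reproduced by hand with the same commands the log shows:]

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json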
	I1011 21:47:47.315690 1085624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:47:47.329091 1085624 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1011 21:47:47.329113 1085624 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1011 21:47:47.329166 1085624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1011 21:47:47.340568 1085624 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1011 21:47:47.341015 1085624 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-310298" does not appear in /home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:47:47.341130 1085624 kubeconfig.go:62] /home/jenkins/minikube-integration/19749-870468/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-310298" cluster setting kubeconfig missing "old-k8s-version-310298" context setting]
	I1011 21:47:47.341418 1085624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/kubeconfig: {Name:mk3426a24f1490293c678ab8b1b76454f1a9ac37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
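[Note: after the repair above, the profile's context should resolve from the integration kubeconfig; a quick check using the path from this log:]

    kubectl --kubeconfig /home/jenkins/minikube-integration/19749-870468/kubeconfig config get-contexts old-k8s-version-310298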
	I1011 21:47:47.342986 1085624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1011 21:47:47.356494 1085624 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1011 21:47:47.356528 1085624 kubeadm.go:597] duration metric: took 27.408785ms to restartPrimaryControlPlane
	I1011 21:47:47.356538 1085624 kubeadm.go:394] duration metric: took 104.773678ms to StartCluster
	I1011 21:47:47.356554 1085624 settings.go:142] acquiring lock: {Name:mk7b73c41886578ea1058c5600a9d67189a81ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:47:47.356606 1085624 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:47:47.357185 1085624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/kubeconfig: {Name:mk3426a24f1490293c678ab8b1b76454f1a9ac37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:47:47.357378 1085624 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1011 21:47:47.357763 1085624 config.go:182] Loaded profile config "old-k8s-version-310298": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1011 21:47:47.357744 1085624 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1011 21:47:47.357868 1085624 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-310298"
	I1011 21:47:47.357883 1085624 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-310298"
	W1011 21:47:47.357889 1085624 addons.go:243] addon storage-provisioner should already be in state true
	I1011 21:47:47.357916 1085624 host.go:66] Checking if "old-k8s-version-310298" exists ...
	I1011 21:47:47.357906 1085624 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-310298"
	I1011 21:47:47.357946 1085624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-310298"
	I1011 21:47:47.358259 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:47.358386 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:47.358894 1085624 addons.go:69] Setting dashboard=true in profile "old-k8s-version-310298"
	I1011 21:47:47.358919 1085624 addons.go:234] Setting addon dashboard=true in "old-k8s-version-310298"
	W1011 21:47:47.358928 1085624 addons.go:243] addon dashboard should already be in state true
	I1011 21:47:47.358960 1085624 host.go:66] Checking if "old-k8s-version-310298" exists ...
	I1011 21:47:47.359462 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:47.360808 1085624 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-310298"
	I1011 21:47:47.361284 1085624 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-310298"
	W1011 21:47:47.361321 1085624 addons.go:243] addon metrics-server should already be in state true
	I1011 21:47:47.361380 1085624 host.go:66] Checking if "old-k8s-version-310298" exists ...
	I1011 21:47:47.366749 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:47.361219 1085624 out.go:177] * Verifying Kubernetes components...
	I1011 21:47:47.382550 1085624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:47:47.423277 1085624 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-310298"
	W1011 21:47:47.423299 1085624 addons.go:243] addon default-storageclass should already be in state true
	I1011 21:47:47.423325 1085624 host.go:66] Checking if "old-k8s-version-310298" exists ...
	I1011 21:47:47.423741 1085624 cli_runner.go:164] Run: docker container inspect old-k8s-version-310298 --format={{.State.Status}}
	I1011 21:47:47.434361 1085624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1011 21:47:47.436260 1085624 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1011 21:47:47.436998 1085624 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:47:47.437018 1085624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1011 21:47:47.437086 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:47.442361 1085624 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1011 21:47:47.444289 1085624 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1011 21:47:47.446537 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1011 21:47:47.446559 1085624 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1011 21:47:47.446636 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:47.447652 1085624 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1011 21:47:47.447678 1085624 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1011 21:47:47.447739 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
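
The cli_runner invocations above use Docker's Go-template --format flag two ways: {{.State.Status}} yields the container's state, and the nested index expression pulls the host port mapped to container port 22/tcp (which resolves to 34170 in the sshutil lines below). The same queries from Go, shelling out to the docker CLI with the formats copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // inspect runs `docker container inspect --format <tmpl>` and returns
    // the trimmed output, the same query shape cli_runner logs above.
    func inspect(container, format string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            container, "--format", format).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, _ := inspect("old-k8s-version-310298", "{{.State.Status}}")
        sshPort, _ := inspect("old-k8s-version-310298",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
        fmt.Println(state, sshPort) // e.g. "running 34170"
    }
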
	I1011 21:47:47.503852 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:47.504894 1085624 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1011 21:47:47.504911 1085624 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1011 21:47:47.504966 1085624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-310298
	I1011 21:47:47.523547 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:47.540413 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
	I1011 21:47:47.558894 1085624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34170 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/old-k8s-version-310298/id_rsa Username:docker}
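
Each sshutil line above constructs an SSH client from the tuple {IP, Port, SSHKeyPath, Username} that cli_runner just resolved. A bare-bones equivalent with golang.org/x/crypto/ssh; host-key checking is disabled purely to keep the sketch short, and minikube's real sshutil differs:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newClient dials an SSH server using a private-key file, the same
    // inputs as the sshutil.go:53 log lines above.
    func newClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
        }
        return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    func main() {
        c, err := newClient("127.0.0.1", 34170, "/path/to/id_rsa", "docker")
        fmt.Println(c, err)
    }
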
	I1011 21:47:47.605042 1085624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:47:47.662219 1085624 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-310298" to be "Ready" ...
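
The 6m0s node wait that starts here is a poll loop: query the node, check its Ready condition, sleep, repeat, treating errors as transient (the "connection refused" node_ready.go:53 lines further down are exactly those tolerated errors). A stdlib-only sketch of that shape, with the check function left as a stand-in:

    package main

    import (
        "fmt"
        "time"
    )

    // waitReady polls check every interval until it reports true or the
    // timeout lapses; errors are swallowed as transient, matching how
    // node_ready.go keeps polling through "connection refused" below.
    func waitReady(check func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ready, err := check(); err == nil && ready {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("not Ready within %s", timeout)
    }

    func main() {
        start := time.Now()
        err := waitReady(func() (bool, error) {
            return time.Since(start) > 3*time.Second, nil // stand-in check
        }, time.Second, 6*time.Minute)
        fmt.Println(err)
    }
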
	I1011 21:47:47.766894 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1011 21:47:47.766915 1085624 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1011 21:47:47.789280 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:47:47.814965 1085624 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1011 21:47:47.814985 1085624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1011 21:47:47.821928 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:47:47.823183 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1011 21:47:47.823233 1085624 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1011 21:47:47.891761 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1011 21:47:47.891894 1085624 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1011 21:47:47.920894 1085624 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1011 21:47:47.920962 1085624 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1011 21:47:47.953138 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1011 21:47:47.953203 1085624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1011 21:47:48.048554 1085624 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 21:47:48.048624 1085624 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1011 21:47:48.070766 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1011 21:47:48.070864 1085624 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1011 21:47:48.178040 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 21:47:48.180899 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1011 21:47:48.180963 1085624 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1011 21:47:48.216594 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.216695 1085624 retry.go:31] will retry after 321.113032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
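
From here until roughly 21:48:00 the log is dominated by this one pattern: each kubectl apply fails with "connection refused" while the apiserver restarts, and retry.go reschedules it after a short, slowly growing randomized delay (321ms, 221ms, 351ms, ... up to several seconds). A compact sketch of that retry-with-jittered-backoff loop; the exact delay schedule is an assumption, not minikube's:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing delay
    // between failures - the same shape as the retry.go:31 lines in this log.
    func retry(attempts int, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(400)) * time.Millisecond << uint(i/3)
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        i := 0
        _ = retry(10, func() error {
            if i++; i < 4 {
                return errors.New("connection to the server localhost:8443 was refused")
            }
            return nil
        })
    }
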
	W1011 21:47:48.222897 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.222971 1085624 retry.go:31] will retry after 221.374357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.247991 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1011 21:47:48.248071 1085624 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1011 21:47:48.309870 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1011 21:47:48.309958 1085624 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1011 21:47:48.380597 1085624 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:48.380681 1085624 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W1011 21:47:48.386164 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.386244 1085624 retry.go:31] will retry after 351.805103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.416097 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:48.445343 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:47:48.538768 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:48.560692 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.560797 1085624 retry.go:31] will retry after 181.539286ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:48.680871 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.680972 1085624 retry.go:31] will retry after 288.519416ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:48.696338 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.696421 1085624 retry.go:31] will retry after 470.079518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.738686 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 21:47:48.743521 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1011 21:47:48.896602 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.896700 1085624 retry.go:31] will retry after 464.566412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:48.930398 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.930534 1085624 retry.go:31] will retry after 542.722625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:48.969702 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1011 21:47:49.062530 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.062564 1085624 retry.go:31] will retry after 632.776867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.167432 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:49.262561 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.262597 1085624 retry.go:31] will retry after 520.241561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.361888 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1011 21:47:49.468769 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.468807 1085624 retry.go:31] will retry after 304.823745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.473996 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1011 21:47:49.575528 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.575614 1085624 retry.go:31] will retry after 482.889213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.663112 1085624 node_ready.go:53] error getting node "old-k8s-version-310298": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-310298": dial tcp 192.168.76.2:8443: connect: connection refused
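
The dial error above (connect: connection refused on 192.168.76.2:8443) has the same root cause as every kubectl failure in this stretch: nothing is listening on the apiserver port yet. A quick TCP reachability probe that separates "port closed" from other failures; matching against syscall.ECONNREFUSED this way is Unix-specific:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    // probe dials addr and classifies the outcome, distinguishing a refused
    // connection (port closed, host up) from timeouts and routing failures.
    func probe(addr string) string {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        switch {
        case err == nil:
            conn.Close()
            return "listening"
        case errors.Is(err, syscall.ECONNREFUSED):
            return "connection refused (nothing bound yet)"
        default:
            return "unreachable: " + err.Error()
        }
    }

    func main() {
        fmt.Println(probe("192.168.76.2:8443"))
    }
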
	I1011 21:47:49.696407 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:47:49.773775 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 21:47:49.783200 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:49.805801 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.805911 1085624 retry.go:31] will retry after 911.776787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:49.910551 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.910712 1085624 retry.go:31] will retry after 476.749674ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:49.938465 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:49.938556 1085624 retry.go:31] will retry after 707.86824ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.058955 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1011 21:47:50.139687 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.139725 1085624 retry.go:31] will retry after 1.061414354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.387620 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1011 21:47:50.474632 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.474740 1085624 retry.go:31] will retry after 1.712670765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.646902 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:47:50.718760 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1011 21:47:50.728789 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.728824 1085624 retry.go:31] will retry after 634.432706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:50.830075 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:50.830110 1085624 retry.go:31] will retry after 1.176626942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:51.202056 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1011 21:47:51.310750 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:51.310785 1085624 retry.go:31] will retry after 686.131511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:51.363915 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:51.478037 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:51.478072 1085624 retry.go:31] will retry after 1.30084463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:51.663725 1085624 node_ready.go:53] error getting node "old-k8s-version-310298": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-310298": dial tcp 192.168.76.2:8443: connect: connection refused
	I1011 21:47:51.997891 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:52.007423 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1011 21:47:52.166459 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.166495 1085624 retry.go:31] will retry after 1.988521195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.187812 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1011 21:47:52.195726 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.195761 1085624 retry.go:31] will retry after 1.602846512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1011 21:47:52.302618 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.302662 1085624 retry.go:31] will retry after 992.68676ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.779964 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:52.858095 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:52.858130 1085624 retry.go:31] will retry after 2.851794264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:53.296337 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1011 21:47:53.378799 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:53.378832 1085624 retry.go:31] will retry after 4.011482113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:53.799804 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1011 21:47:53.885798 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:53.885830 1085624 retry.go:31] will retry after 2.669989646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:54.155260 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:54.162946 1085624 node_ready.go:53] error getting node "old-k8s-version-310298": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-310298": dial tcp 192.168.76.2:8443: connect: connection refused
	W1011 21:47:54.225089 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:54.225122 1085624 retry.go:31] will retry after 1.847094387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:55.711028 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1011 21:47:55.907962 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:55.907997 1085624 retry.go:31] will retry after 4.019442027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:56.073411 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:56.163579 1085624 node_ready.go:53] error getting node "old-k8s-version-310298": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-310298": dial tcp 192.168.76.2:8443: connect: connection refused
	W1011 21:47:56.368295 1085624 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:56.368328 1085624 retry.go:31] will retry after 3.372489498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1011 21:47:56.556639 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1011 21:47:57.391247 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1011 21:47:59.741657 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1011 21:47:59.927636 1085624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1011 21:48:06.367726 1085624 node_ready.go:49] node "old-k8s-version-310298" has status "Ready":"True"
	I1011 21:48:06.367754 1085624 node_ready.go:38] duration metric: took 18.705445695s for node "old-k8s-version-310298" to be "Ready" ...
	I1011 21:48:06.367765 1085624 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
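
The "extra waiting" phase above checks one label selector per control-plane component and requires every matching pod to carry a PodReady condition of True, which is what the pod_ready.go:93/:103 lines below are reporting. A sketch of that check with client-go, assuming an already-constructed *kubernetes.Clientset (the real pod_ready.go logic differs in detail):

    package readycheck

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allReady lists kube-system pods for each selector from the log line
    // above and reports false as soon as any pod is not Ready.
    func allReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                if !podReady(&p) {
                    return false, nil
                }
            }
        }
        return true, nil
    }

    // podReady inspects the pod's conditions for PodReady == True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
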
	I1011 21:48:06.503584 1085624 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-hvgz4" in "kube-system" namespace to be "Ready" ...
	I1011 21:48:06.695208 1085624 pod_ready.go:93] pod "coredns-74ff55c5b-hvgz4" in "kube-system" namespace has status "Ready":"True"
	I1011 21:48:06.695243 1085624 pod_ready.go:82] duration metric: took 191.611952ms for pod "coredns-74ff55c5b-hvgz4" in "kube-system" namespace to be "Ready" ...
	I1011 21:48:06.695256 1085624 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:48:07.768014 1085624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.211328377s)
	I1011 21:48:08.339665 1085624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.948370245s)
	I1011 21:48:08.339703 1085624 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-310298"
	I1011 21:48:08.577126 1085624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.835420179s)
	I1011 21:48:08.577320 1085624 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.649658126s)
	I1011 21:48:08.579543 1085624 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-310298 addons enable metrics-server
	
	I1011 21:48:08.581690 1085624 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1011 21:48:08.584069 1085624 addons.go:510] duration metric: took 21.226323646s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
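
The "duration metric" lines scattered through this log (27.4ms for restartPrimaryControlPlane, 18.7s for node Ready, 21.2s here for enable addons) all follow the standard record-the-start, report-time.Since pattern:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond) // stand-in for the step being measured
        fmt.Printf("duration metric: took %s for enable addons\n", time.Since(start))
    }
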
	I1011 21:48:08.708855 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:11.201862 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:13.701776 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:15.703365 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:18.207442 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:20.701519 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:23.202447 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:25.701298 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:27.704068 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:30.202855 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:32.204559 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:34.702926 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:36.706026 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:39.204619 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:41.701637 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:43.701851 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:45.702454 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:48.201742 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:50.202227 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:52.702348 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:55.202313 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:48:57.702253 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:00.206718 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:02.702349 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:04.703305 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:07.202061 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:09.202919 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:11.702818 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:14.201385 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:16.201955 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:18.701731 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:20.702564 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:23.201652 1085624 pod_ready.go:103] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:25.702877 1085624 pod_ready.go:93] pod "etcd-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"True"
	I1011 21:49:25.702905 1085624 pod_ready.go:82] duration metric: took 1m19.007640506s for pod "etcd-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.702951 1085624 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.712036 1085624 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"True"
	I1011 21:49:25.712065 1085624 pod_ready.go:82] duration metric: took 9.055176ms for pod "kube-apiserver-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.712079 1085624 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.717418 1085624 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"True"
	I1011 21:49:25.717444 1085624 pod_ready.go:82] duration metric: took 5.357578ms for pod "kube-controller-manager-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.717458 1085624 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6nvx" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.722541 1085624 pod_ready.go:93] pod "kube-proxy-h6nvx" in "kube-system" namespace has status "Ready":"True"
	I1011 21:49:25.722568 1085624 pod_ready.go:82] duration metric: took 5.102646ms for pod "kube-proxy-h6nvx" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:25.722578 1085624 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:27.728679 1085624 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:29.729206 1085624 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:31.729300 1085624 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:33.732071 1085624 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:34.730056 1085624 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace has status "Ready":"True"
	I1011 21:49:34.730083 1085624 pod_ready.go:82] duration metric: took 9.007496517s for pod "kube-scheduler-old-k8s-version-310298" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:34.730096 1085624 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace to be "Ready" ...
	I1011 21:49:36.736315 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:39.236164 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:41.736810 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:44.236405 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:46.737597 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:49.236233 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:51.236564 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:53.736385 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:56.236524 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:49:58.237548 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:00.273342 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:02.736068 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:04.736546 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:06.737169 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:09.235606 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:11.236312 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:13.735857 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:15.736808 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:18.239441 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:20.736755 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:23.235901 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:25.243915 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:27.338118 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:29.736959 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:32.237367 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:34.737026 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:36.738744 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:39.236258 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:41.237279 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:43.242769 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:45.736831 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:48.236217 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:50.735792 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:52.736347 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:54.737545 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:57.236285 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:50:59.736379 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:02.236447 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:04.737040 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:07.236029 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:09.236333 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:11.736386 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:14.235920 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:16.736245 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:18.736484 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:20.737171 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:23.236776 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:25.736092 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:27.736773 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:29.737133 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:32.236500 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:34.736719 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:36.737335 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:39.236268 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:41.737339 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:44.235951 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:46.236527 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:48.736344 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:51.235829 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:53.236414 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:55.236990 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:57.736823 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:51:59.737001 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:02.236132 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:04.736178 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:07.236395 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:09.736275 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:11.746575 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:14.236172 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:16.736481 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:18.736722 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:21.236737 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:23.736430 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:25.737096 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:27.737177 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:30.236552 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:32.743919 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:35.236181 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:37.236979 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:39.736284 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:41.736756 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:44.237476 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:46.735768 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:48.736768 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:51.236144 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:53.236646 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:55.237228 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:52:57.737179 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:00.238030 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:02.736724 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:04.738374 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:07.236342 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:09.237822 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:11.736487 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:13.736719 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:16.237443 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:18.735757 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:20.736317 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:22.736363 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:24.736781 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:27.236573 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:29.236872 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:31.237230 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:33.238667 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:34.738636 1085624 pod_ready.go:82] duration metric: took 4m0.008524527s for pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace to be "Ready" ...
	E1011 21:53:34.738661 1085624 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 21:53:34.738671 1085624 pod_ready.go:39] duration metric: took 5m28.370895394s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:53:34.738687 1085624 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:53:34.738717 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:53:34.738782 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:53:34.825090 1085624 cri.go:89] found id: "9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:34.825113 1085624 cri.go:89] found id: "5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:34.825117 1085624 cri.go:89] found id: ""
	I1011 21:53:34.825125 1085624 logs.go:282] 2 containers: [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b]
	I1011 21:53:34.825187 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.829431 1085624 ssh_runner.go:195] Run: which crictl
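
Each component lookup above and below follows the same two-step recipe: resolve the crictl binary with "which crictl", then list matching container IDs with "crictl ps -a --quiet --name=<component>". Two IDs per component is consistent with this being a restart test: "-a" includes stopped containers, so the exited pre-restart container shows up beside the running one. A hedged Go sketch of that discovery step follows; listContainerIDs is a hypothetical helper, not minikube's cri.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs shells out to crictl exactly as the ssh_runner lines
    // above do, and returns the bare container IDs printed by --quiet.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("found %d: %v\n", len(ids), ids)
    }
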
	I1011 21:53:34.833120 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1011 21:53:34.833195 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:53:34.931287 1085624 cri.go:89] found id: "9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:34.931307 1085624 cri.go:89] found id: "82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:34.931312 1085624 cri.go:89] found id: ""
	I1011 21:53:34.931320 1085624 logs.go:282] 2 containers: [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab]
	I1011 21:53:34.931374 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.935119 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.939281 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1011 21:53:34.939365 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:53:34.998718 1085624 cri.go:89] found id: "cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:34.998741 1085624 cri.go:89] found id: "eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:34.998747 1085624 cri.go:89] found id: ""
	I1011 21:53:34.998755 1085624 logs.go:282] 2 containers: [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848]
	I1011 21:53:34.998810 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.005603 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.019425 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:53:35.019509 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:53:35.106941 1085624 cri.go:89] found id: "407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:35.106965 1085624 cri.go:89] found id: "5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:35.106972 1085624 cri.go:89] found id: ""
	I1011 21:53:35.106980 1085624 logs.go:282] 2 containers: [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7]
	I1011 21:53:35.107033 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.111057 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.114945 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:53:35.115039 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:53:35.205933 1085624 cri.go:89] found id: "bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:35.205958 1085624 cri.go:89] found id: "032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:35.205964 1085624 cri.go:89] found id: ""
	I1011 21:53:35.205972 1085624 logs.go:282] 2 containers: [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3]
	I1011 21:53:35.206027 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.211730 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.216548 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:53:35.216618 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:53:35.263659 1085624 cri.go:89] found id: "be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:35.263681 1085624 cri.go:89] found id: "ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:35.263687 1085624 cri.go:89] found id: ""
	I1011 21:53:35.263697 1085624 logs.go:282] 2 containers: [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb]
	I1011 21:53:35.263749 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.267382 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.270717 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1011 21:53:35.270784 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:53:35.326779 1085624 cri.go:89] found id: "eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:35.326803 1085624 cri.go:89] found id: "8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:35.326809 1085624 cri.go:89] found id: ""
	I1011 21:53:35.326816 1085624 logs.go:282] 2 containers: [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e]
	I1011 21:53:35.326870 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.331050 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.335186 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 21:53:35.335263 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 21:53:35.386234 1085624 cri.go:89] found id: "35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:35.386257 1085624 cri.go:89] found id: ""
	I1011 21:53:35.386299 1085624 logs.go:282] 1 container: [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3]
	I1011 21:53:35.386354 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.390098 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1011 21:53:35.390174 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1011 21:53:35.437777 1085624 cri.go:89] found id: "2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:35.437802 1085624 cri.go:89] found id: "eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:35.437807 1085624 cri.go:89] found id: ""
	I1011 21:53:35.437815 1085624 logs.go:282] 2 containers: [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8]
	I1011 21:53:35.437870 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.442836 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.446492 1085624 logs.go:123] Gathering logs for kindnet [8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e] ...
	I1011 21:53:35.446518 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:35.496118 1085624 logs.go:123] Gathering logs for containerd ...
	I1011 21:53:35.496148 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1011 21:53:35.575504 1085624 logs.go:123] Gathering logs for etcd [82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab] ...
	I1011 21:53:35.575546 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:35.635236 1085624 logs.go:123] Gathering logs for coredns [eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848] ...
	I1011 21:53:35.635267 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:35.680667 1085624 logs.go:123] Gathering logs for kube-scheduler [5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7] ...
	I1011 21:53:35.680697 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:35.736511 1085624 logs.go:123] Gathering logs for kube-controller-manager [ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb] ...
	I1011 21:53:35.736543 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:35.817611 1085624 logs.go:123] Gathering logs for storage-provisioner [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480] ...
	I1011 21:53:35.817642 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:35.886476 1085624 logs.go:123] Gathering logs for container status ...
	I1011 21:53:35.886505 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:53:35.944581 1085624 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:53:35.944611 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:53:36.129491 1085624 logs.go:123] Gathering logs for etcd [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1] ...
	I1011 21:53:36.129525 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:36.186921 1085624 logs.go:123] Gathering logs for coredns [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb] ...
	I1011 21:53:36.186951 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:36.240289 1085624 logs.go:123] Gathering logs for kubernetes-dashboard [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3] ...
	I1011 21:53:36.240319 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:36.295024 1085624 logs.go:123] Gathering logs for dmesg ...
	I1011 21:53:36.295056 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:53:36.311832 1085624 logs.go:123] Gathering logs for kube-proxy [032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3] ...
	I1011 21:53:36.311858 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:36.365599 1085624 logs.go:123] Gathering logs for kindnet [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e] ...
	I1011 21:53:36.365630 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:36.432040 1085624 logs.go:123] Gathering logs for kube-scheduler [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466] ...
	I1011 21:53:36.432125 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:36.481093 1085624 logs.go:123] Gathering logs for kube-proxy [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31] ...
	I1011 21:53:36.481168 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:36.528111 1085624 logs.go:123] Gathering logs for kube-controller-manager [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad] ...
	I1011 21:53:36.528197 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:36.601710 1085624 logs.go:123] Gathering logs for storage-provisioner [eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8] ...
	I1011 21:53:36.601784 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
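
The kubelet pass that comes next does one thing the other collectors do not: after pulling the journal, minikube scans each line for known problem signatures and surfaces the hits as the W-level "Found kubelet problem" entries below. A rough sketch of that kind of scan, assuming a single substring marker ("Error syncing pod", which matches the lines flagged below); minikube's real matching in logs.go is more involved.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // scanForProblems tails a systemd unit's journal, as the command above
    // does, and reports lines that look like pod sync failures.
    func scanForProblems(unit string) ([]string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
    	if err != nil {
    		return nil, err
    	}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(string(out)))
    	for sc.Scan() {
    		if strings.Contains(sc.Text(), "Error syncing pod") {
    			problems = append(problems, sc.Text())
    		}
    	}
    	return problems, sc.Err()
    }

    func main() {
    	problems, err := scanForProblems("kubelet")
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	for _, p := range problems {
    		fmt.Println("Found kubelet problem:", p)
    	}
    }
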
	I1011 21:53:36.644577 1085624 logs.go:123] Gathering logs for kubelet ...
	I1011 21:53:36.644650 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:53:36.704541 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:06 old-k8s-version-310298 kubelet[666]: E1011 21:48:06.555132     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-310298" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-310298' and this object
	W1011 21:53:36.708245 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:07 old-k8s-version-310298 kubelet[666]: E1011 21:48:07.898796     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.708442 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:08 old-k8s-version-310298 kubelet[666]: E1011 21:48:08.734852     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.711555 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:22 old-k8s-version-310298 kubelet[666]: E1011 21:48:22.357549     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.713401 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.350204     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.713733 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.842524     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.714598 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:35 old-k8s-version-310298 kubelet[666]: E1011 21:48:35.846225     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.714931 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.052714     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.715374 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.856716     666 pod_workers.go:191] Error syncing pod 90e529b8-25e8-41b1-9f66-bfd3ec245bf3 ("storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"
	W1011 21:53:36.718188 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:47 old-k8s-version-310298 kubelet[666]: E1011 21:48:47.365069     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.720517 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:51 old-k8s-version-310298 kubelet[666]: E1011 21:48:51.891977     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.720883 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:58 old-k8s-version-310298 kubelet[666]: E1011 21:48:58.053122     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.721072 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:00 old-k8s-version-310298 kubelet[666]: E1011 21:49:00.350678     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.721405 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:10 old-k8s-version-310298 kubelet[666]: E1011 21:49:10.350114     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.721596 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:13 old-k8s-version-310298 kubelet[666]: E1011 21:49:13.351545     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.722333 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:25 old-k8s-version-310298 kubelet[666]: E1011 21:49:25.986763     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.722540 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:27 old-k8s-version-310298 kubelet[666]: E1011 21:49:27.349958     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.722965 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:28 old-k8s-version-310298 kubelet[666]: E1011 21:49:28.052147     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.725465 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:38 old-k8s-version-310298 kubelet[666]: E1011 21:49:38.361564     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.725800 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:39 old-k8s-version-310298 kubelet[666]: E1011 21:49:39.350464     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.725988 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:49 old-k8s-version-310298 kubelet[666]: E1011 21:49:49.350331     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.726339 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:52 old-k8s-version-310298 kubelet[666]: E1011 21:49:52.349607     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.726524 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:02 old-k8s-version-310298 kubelet[666]: E1011 21:50:02.350044     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.726867 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:03 old-k8s-version-310298 kubelet[666]: E1011 21:50:03.353309     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.727051 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:14 old-k8s-version-310298 kubelet[666]: E1011 21:50:14.349894     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.727646 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:16 old-k8s-version-310298 kubelet[666]: E1011 21:50:16.174777     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.727979 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:18 old-k8s-version-310298 kubelet[666]: E1011 21:50:18.052598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.728163 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:25 old-k8s-version-310298 kubelet[666]: E1011 21:50:25.351630     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.728494 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:32 old-k8s-version-310298 kubelet[666]: E1011 21:50:32.349548     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.728681 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:38 old-k8s-version-310298 kubelet[666]: E1011 21:50:38.350026     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.729010 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:44 old-k8s-version-310298 kubelet[666]: E1011 21:50:44.349518     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.729195 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:50 old-k8s-version-310298 kubelet[666]: E1011 21:50:50.350344     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.729527 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:57 old-k8s-version-310298 kubelet[666]: E1011 21:50:57.350003     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.732194 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:03 old-k8s-version-310298 kubelet[666]: E1011 21:51:03.362364     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.732632 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:10 old-k8s-version-310298 kubelet[666]: E1011 21:51:10.349716     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.732836 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:14 old-k8s-version-310298 kubelet[666]: E1011 21:51:14.349911     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.733191 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:22 old-k8s-version-310298 kubelet[666]: E1011 21:51:22.349634     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.733391 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:25 old-k8s-version-310298 kubelet[666]: E1011 21:51:25.350570     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.734045 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:38 old-k8s-version-310298 kubelet[666]: E1011 21:51:38.414410     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.734241 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:40 old-k8s-version-310298 kubelet[666]: E1011 21:51:40.350770     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.734639 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:48 old-k8s-version-310298 kubelet[666]: E1011 21:51:48.052903     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.734874 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:52 old-k8s-version-310298 kubelet[666]: E1011 21:51:52.349920     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.735251 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:00 old-k8s-version-310298 kubelet[666]: E1011 21:52:00.354871     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.735538 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:06 old-k8s-version-310298 kubelet[666]: E1011 21:52:06.350036     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.735906 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:12 old-k8s-version-310298 kubelet[666]: E1011 21:52:12.349554     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.736125 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:21 old-k8s-version-310298 kubelet[666]: E1011 21:52:21.350139     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.736560 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:27 old-k8s-version-310298 kubelet[666]: E1011 21:52:27.349598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.736792 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:32 old-k8s-version-310298 kubelet[666]: E1011 21:52:32.349985     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.737162 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: E1011 21:52:38.349563     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.737379 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:43 old-k8s-version-310298 kubelet[666]: E1011 21:52:43.350681     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.737757 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: E1011 21:52:52.349495     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.737974 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:54 old-k8s-version-310298 kubelet[666]: E1011 21:52:54.349999     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.738333 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: E1011 21:53:04.349825     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.738534 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.738963 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.739154 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.739364 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.739719 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:36.739742 1085624 logs.go:123] Gathering logs for kube-apiserver [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094] ...
	I1011 21:53:36.739760 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:36.834629 1085624 logs.go:123] Gathering logs for kube-apiserver [5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b] ...
	I1011 21:53:36.834706 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:36.904644 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:36.904674 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:53:36.904730 1085624 out.go:270] X Problems detected in kubelet:
	W1011 21:53:36.904744 1085624 out.go:270]   Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904753 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.904768 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904774 1085624 out.go:270]   Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904780 1085624 out.go:270]   Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:36.904803 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:36.904815 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:53:46.905745 1085624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:53:46.917698 1085624 api_server.go:72] duration metric: took 5m59.560280279s to wait for apiserver process to appear ...
	I1011 21:53:46.917738 1085624 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:53:46.917784 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:53:46.917856 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:53:46.967727 1085624 cri.go:89] found id: "9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:46.967753 1085624 cri.go:89] found id: "5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:46.967764 1085624 cri.go:89] found id: ""
	I1011 21:53:46.967772 1085624 logs.go:282] 2 containers: [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b]
	I1011 21:53:46.967828 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:46.971860 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:46.975332 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1011 21:53:46.975399 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:53:47.030587 1085624 cri.go:89] found id: "9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:47.030620 1085624 cri.go:89] found id: "82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:47.030627 1085624 cri.go:89] found id: ""
	I1011 21:53:47.030634 1085624 logs.go:282] 2 containers: [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab]
	I1011 21:53:47.030701 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.037536 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.041167 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1011 21:53:47.041292 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:53:47.088464 1085624 cri.go:89] found id: "cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:47.088496 1085624 cri.go:89] found id: "eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:47.088502 1085624 cri.go:89] found id: ""
	I1011 21:53:47.088510 1085624 logs.go:282] 2 containers: [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848]
	I1011 21:53:47.088584 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.092201 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.095599 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:53:47.095668 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:53:47.163423 1085624 cri.go:89] found id: "407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:47.163496 1085624 cri.go:89] found id: "5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:47.163516 1085624 cri.go:89] found id: ""
	I1011 21:53:47.163537 1085624 logs.go:282] 2 containers: [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7]
	I1011 21:53:47.163621 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.167536 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.171184 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:53:47.171300 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:53:47.217944 1085624 cri.go:89] found id: "bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:47.218018 1085624 cri.go:89] found id: "032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:47.218037 1085624 cri.go:89] found id: ""
	I1011 21:53:47.218059 1085624 logs.go:282] 2 containers: [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3]
	I1011 21:53:47.218139 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.221936 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.225534 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:53:47.225649 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:53:47.305085 1085624 cri.go:89] found id: "be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:47.305157 1085624 cri.go:89] found id: "ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:47.305179 1085624 cri.go:89] found id: ""
	I1011 21:53:47.305205 1085624 logs.go:282] 2 containers: [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb]
	I1011 21:53:47.305298 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.309800 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.313696 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1011 21:53:47.313816 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:53:47.367189 1085624 cri.go:89] found id: "eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:47.367261 1085624 cri.go:89] found id: "8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:47.367281 1085624 cri.go:89] found id: ""
	I1011 21:53:47.367307 1085624 logs.go:282] 2 containers: [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e]
	I1011 21:53:47.367399 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.371197 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.374814 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 21:53:47.374923 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 21:53:47.427202 1085624 cri.go:89] found id: "35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:47.427272 1085624 cri.go:89] found id: ""
	I1011 21:53:47.427295 1085624 logs.go:282] 1 containers: [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3]
	I1011 21:53:47.427383 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.431104 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1011 21:53:47.431220 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1011 21:53:47.479983 1085624 cri.go:89] found id: "2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:47.480057 1085624 cri.go:89] found id: "eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:47.480077 1085624 cri.go:89] found id: ""
	I1011 21:53:47.480103 1085624 logs.go:282] 2 containers: [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8]
	I1011 21:53:47.480192 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.483906 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.487547 1085624 logs.go:123] Gathering logs for etcd [82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab] ...
	I1011 21:53:47.487608 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:47.540211 1085624 logs.go:123] Gathering logs for kube-controller-manager [ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb] ...
	I1011 21:53:47.540289 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:47.613759 1085624 logs.go:123] Gathering logs for kubelet ...
	I1011 21:53:47.613836 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:53:47.721084 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:06 old-k8s-version-310298 kubelet[666]: E1011 21:48:06.555132     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-310298" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-310298' and this object
	W1011 21:53:47.724782 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:07 old-k8s-version-310298 kubelet[666]: E1011 21:48:07.898796     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.725034 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:08 old-k8s-version-310298 kubelet[666]: E1011 21:48:08.734852     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.730739 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:22 old-k8s-version-310298 kubelet[666]: E1011 21:48:22.357549     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.732638 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.350204     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.733019 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.842524     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.733834 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:35 old-k8s-version-310298 kubelet[666]: E1011 21:48:35.846225     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.734191 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.052714     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.734680 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.856716     666 pod_workers.go:191] Error syncing pod 90e529b8-25e8-41b1-9f66-bfd3ec245bf3 ("storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"
	W1011 21:53:47.737472 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:47 old-k8s-version-310298 kubelet[666]: E1011 21:48:47.365069     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.738216 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:51 old-k8s-version-310298 kubelet[666]: E1011 21:48:51.891977     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.738581 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:58 old-k8s-version-310298 kubelet[666]: E1011 21:48:58.053122     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.738795 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:00 old-k8s-version-310298 kubelet[666]: E1011 21:49:00.350678     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.739153 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:10 old-k8s-version-310298 kubelet[666]: E1011 21:49:10.350114     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.739369 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:13 old-k8s-version-310298 kubelet[666]: E1011 21:49:13.351545     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.740033 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:25 old-k8s-version-310298 kubelet[666]: E1011 21:49:25.986763     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.740248 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:27 old-k8s-version-310298 kubelet[666]: E1011 21:49:27.349958     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.740605 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:28 old-k8s-version-310298 kubelet[666]: E1011 21:49:28.052147     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.743293 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:38 old-k8s-version-310298 kubelet[666]: E1011 21:49:38.361564     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.743630 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:39 old-k8s-version-310298 kubelet[666]: E1011 21:49:39.350464     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.743812 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:49 old-k8s-version-310298 kubelet[666]: E1011 21:49:49.350331     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.744136 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:52 old-k8s-version-310298 kubelet[666]: E1011 21:49:52.349607     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.744316 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:02 old-k8s-version-310298 kubelet[666]: E1011 21:50:02.350044     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.744646 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:03 old-k8s-version-310298 kubelet[666]: E1011 21:50:03.353309     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.744826 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:14 old-k8s-version-310298 kubelet[666]: E1011 21:50:14.349894     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.745409 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:16 old-k8s-version-310298 kubelet[666]: E1011 21:50:16.174777     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.745736 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:18 old-k8s-version-310298 kubelet[666]: E1011 21:50:18.052598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.745914 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:25 old-k8s-version-310298 kubelet[666]: E1011 21:50:25.351630     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.746239 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:32 old-k8s-version-310298 kubelet[666]: E1011 21:50:32.349548     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.746549 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:38 old-k8s-version-310298 kubelet[666]: E1011 21:50:38.350026     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.746932 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:44 old-k8s-version-310298 kubelet[666]: E1011 21:50:44.349518     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.747144 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:50 old-k8s-version-310298 kubelet[666]: E1011 21:50:50.350344     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.747503 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:57 old-k8s-version-310298 kubelet[666]: E1011 21:50:57.350003     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.749960 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:03 old-k8s-version-310298 kubelet[666]: E1011 21:51:03.362364     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.750360 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:10 old-k8s-version-310298 kubelet[666]: E1011 21:51:10.349716     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.750584 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:14 old-k8s-version-310298 kubelet[666]: E1011 21:51:14.349911     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.750937 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:22 old-k8s-version-310298 kubelet[666]: E1011 21:51:22.349634     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.751150 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:25 old-k8s-version-310298 kubelet[666]: E1011 21:51:25.350570     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.751764 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:38 old-k8s-version-310298 kubelet[666]: E1011 21:51:38.414410     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.751978 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:40 old-k8s-version-310298 kubelet[666]: E1011 21:51:40.350770     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.752334 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:48 old-k8s-version-310298 kubelet[666]: E1011 21:51:48.052903     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.752546 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:52 old-k8s-version-310298 kubelet[666]: E1011 21:51:52.349920     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.752902 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:00 old-k8s-version-310298 kubelet[666]: E1011 21:52:00.354871     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.753114 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:06 old-k8s-version-310298 kubelet[666]: E1011 21:52:06.350036     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.753471 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:12 old-k8s-version-310298 kubelet[666]: E1011 21:52:12.349554     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.753719 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:21 old-k8s-version-310298 kubelet[666]: E1011 21:52:21.350139     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.754079 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:27 old-k8s-version-310298 kubelet[666]: E1011 21:52:27.349598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.754308 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:32 old-k8s-version-310298 kubelet[666]: E1011 21:52:32.349985     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.754669 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: E1011 21:52:38.349563     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.754878 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:43 old-k8s-version-310298 kubelet[666]: E1011 21:52:43.350681     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.755237 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: E1011 21:52:52.349495     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.755451 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:54 old-k8s-version-310298 kubelet[666]: E1011 21:52:54.349999     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.755814 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: E1011 21:53:04.349825     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.756026 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.756386 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.756697 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.756908 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.757269 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.757484 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:42 old-k8s-version-310298 kubelet[666]: E1011 21:53:42.350138     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.757846 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: E1011 21:53:45.362878     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:47.757876 1085624 logs.go:123] Gathering logs for etcd [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1] ...
	I1011 21:53:47.757905 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:47.841811 1085624 logs.go:123] Gathering logs for coredns [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb] ...
	I1011 21:53:47.841892 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:47.899566 1085624 logs.go:123] Gathering logs for kube-proxy [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31] ...
	I1011 21:53:47.899597 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:47.956134 1085624 logs.go:123] Gathering logs for kubernetes-dashboard [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3] ...
	I1011 21:53:47.956171 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:48.035155 1085624 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:53:48.035193 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:53:48.244159 1085624 logs.go:123] Gathering logs for kube-apiserver [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094] ...
	I1011 21:53:48.244190 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:48.344263 1085624 logs.go:123] Gathering logs for kube-scheduler [5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7] ...
	I1011 21:53:48.344345 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:48.417246 1085624 logs.go:123] Gathering logs for kube-proxy [032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3] ...
	I1011 21:53:48.417279 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:48.484287 1085624 logs.go:123] Gathering logs for kube-controller-manager [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad] ...
	I1011 21:53:48.484319 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:48.575776 1085624 logs.go:123] Gathering logs for kindnet [8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e] ...
	I1011 21:53:48.575813 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:48.626351 1085624 logs.go:123] Gathering logs for storage-provisioner [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480] ...
	I1011 21:53:48.626380 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:48.677516 1085624 logs.go:123] Gathering logs for storage-provisioner [eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8] ...
	I1011 21:53:48.677544 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:48.734382 1085624 logs.go:123] Gathering logs for kube-apiserver [5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b] ...
	I1011 21:53:48.734417 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:48.839627 1085624 logs.go:123] Gathering logs for coredns [eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848] ...
	I1011 21:53:48.839663 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:48.904674 1085624 logs.go:123] Gathering logs for containerd ...
	I1011 21:53:48.904702 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1011 21:53:48.975439 1085624 logs.go:123] Gathering logs for kindnet [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e] ...
	I1011 21:53:48.975478 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:49.044532 1085624 logs.go:123] Gathering logs for container status ...
	I1011 21:53:49.044609 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:53:49.167140 1085624 logs.go:123] Gathering logs for dmesg ...
	I1011 21:53:49.167219 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:53:49.225272 1085624 logs.go:123] Gathering logs for kube-scheduler [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466] ...
	I1011 21:53:49.225302 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:49.332611 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:49.332684 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:53:49.332768 1085624 out.go:270] X Problems detected in kubelet:
	W1011 21:53:49.332812 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332866 1085624 out.go:270]   Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332902 1085624 out.go:270]   Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:49.332954 1085624 out.go:270]   Oct 11 21:53:42 old-k8s-version-310298 kubelet[666]: E1011 21:53:42.350138     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332986 1085624 out.go:270]   Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: E1011 21:53:45.362878     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:49.333037 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:49.333059 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:53:59.333701 1085624 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1011 21:53:59.469048 1085624 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1011 21:53:59.471215 1085624 out.go:201] 
	W1011 21:53:59.473010 1085624 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1011 21:53:59.473050 1085624 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1011 21:53:59.473069 1085624 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1011 21:53:59.473075 1085624 out.go:270] * 
	W1011 21:53:59.474037 1085624 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 21:53:59.475339 1085624 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-310298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
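Exit status 102 corresponds to the K8S_UNHEALTHY_CONTROL_PLANE reason in the stderr above: the restarted v1.20.0 control plane never reported healthy within the 6m0s wait, while metrics-server sat in ImagePullBackOff the whole time because this test deliberately rewrites its registry to the unreachable fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the audit table below). A minimal sketch of the recovery path the log itself suggests, reusing the profile name and start flags recorded in the failed args (illustrative only, not part of the test; the KVM flags in the recorded args are dropped here since the run uses the docker driver):

	# Sketch only: apply the "minikube delete --all --purge" suggestion from the
	# failure output, then retry the start flags recorded for this profile.
	minikube delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-310298 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.20.0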
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-310298
helpers_test.go:235: (dbg) docker inspect old-k8s-version-310298:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070",
	        "Created": "2024-10-11T21:44:45.915597814Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1085823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-11T21:47:38.514750254Z",
	            "FinishedAt": "2024-10-11T21:47:37.431746699Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070/hostname",
	        "HostsPath": "/var/lib/docker/containers/215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070/hosts",
	        "LogPath": "/var/lib/docker/containers/215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070/215a30a20429c21b3787a5427e4e6cd0d1ade8488793e077f3c1c1aca8e13070-json.log",
	        "Name": "/old-k8s-version-310298",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-310298:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-310298",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f30f36fef2bd146760e30edde572374166ed34fac6ff05f6a213b97cbdea255d-init/diff:/var/lib/docker/overlay2/64a038944358d2428e67305d9f97679b9a377ef43ac638d6a777391fae594f13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f30f36fef2bd146760e30edde572374166ed34fac6ff05f6a213b97cbdea255d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f30f36fef2bd146760e30edde572374166ed34fac6ff05f6a213b97cbdea255d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f30f36fef2bd146760e30edde572374166ed34fac6ff05f6a213b97cbdea255d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-310298",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-310298/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-310298",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-310298",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-310298",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fac5bf278ce2764bb2938db994f403c1d295b371f82879af9d09d55206fe9bc2",
	            "SandboxKey": "/var/run/docker/netns/fac5bf278ce2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34170"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34171"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34172"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34173"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-310298": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bf142940a76055dc06df5ff580bc3cfbb34c5807512f252c4f3b6e4afd378636",
	                    "EndpointID": "711d72431a5b1276f0daa076f38c725313b1f535a1db01dea574f91bc8b3ba27",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-310298",
	                        "215a30a20429"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
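The inspect output above shows how the kicbase container publishes the control plane: 8443/tcp inside the container is bound to 127.0.0.1:34173 on the host, while on the cluster network the node itself is 192.168.76.2, which is why the healthz probe earlier in the log targets https://192.168.76.2:8443/healthz directly. A sketch for reading that port mapping back out of the same inspect data, using docker's standard Go-template syntax (container name taken from this run):

	# Sketch: print the host port Docker mapped to the API server's 8443/tcp.
	docker inspect -f \
	  '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  old-k8s-version-310298
	# -> 34173 for this run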
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-310298 -n old-k8s-version-310298
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-310298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-310298 logs -n 25: (2.748651111s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-428232                              | cert-expiration-428232   | jenkins | v1.34.0 | 11 Oct 24 21:43 UTC | 11 Oct 24 21:44 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-429719                               | force-systemd-env-429719 | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-429719                            | force-systemd-env-429719 | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	| start   | -p cert-options-549123                                 | cert-options-549123      | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-549123 ssh                                | cert-options-549123      | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-549123 -- sudo                         | cert-options-549123      | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-549123                                 | cert-options-549123      | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:44 UTC |
	| start   | -p old-k8s-version-310298                              | old-k8s-version-310298   | jenkins | v1.34.0 | 11 Oct 24 21:44 UTC | 11 Oct 24 21:47 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-428232                              | cert-expiration-428232   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:47 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-428232                              | cert-expiration-428232   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:47 UTC |
	| start   | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:48 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-310298        | old-k8s-version-310298   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-310298                              | old-k8s-version-310298   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:47 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-310298             | old-k8s-version-310298   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC | 11 Oct 24 21:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-310298                              | old-k8s-version-310298   | jenkins | v1.34.0 | 11 Oct 24 21:47 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-359490             | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:48 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:48 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-359490                  | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:48 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:48 UTC | 11 Oct 24 21:53 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-359490 image list                           | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC | 11 Oct 24 21:53 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC | 11 Oct 24 21:53 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC | 11 Oct 24 21:53 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC | 11 Oct 24 21:53 UTC |
	| delete  | -p no-preload-359490                                   | no-preload-359490        | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC | 11 Oct 24 21:53 UTC |
	| start   | -p embed-certs-159135                                  | embed-certs-159135       | jenkins | v1.34.0 | 11 Oct 24 21:53 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 21:53:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 21:53:33.978161 1096229 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:53:33.978400 1096229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:53:33.978429 1096229 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:33.978449 1096229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:53:33.978721 1096229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:53:33.979175 1096229 out.go:352] Setting JSON to false
	I1011 21:53:33.980236 1096229 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":20161,"bootTime":1728663453,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 21:53:33.980335 1096229 start.go:139] virtualization:  
	I1011 21:53:33.982904 1096229 out.go:177] * [embed-certs-159135] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:53:33.986131 1096229 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:53:33.986185 1096229 notify.go:220] Checking for updates...
	I1011 21:53:33.991021 1096229 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:53:33.992819 1096229 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:53:33.994481 1096229 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 21:53:33.996337 1096229 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:53:33.998193 1096229 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:53:34.000370 1096229 config.go:182] Loaded profile config "old-k8s-version-310298": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1011 21:53:34.000550 1096229 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:53:34.028655 1096229 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:53:34.028884 1096229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:53:34.083512 1096229 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:53:34.073390876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:53:34.083630 1096229 docker.go:318] overlay module found
	I1011 21:53:34.086712 1096229 out.go:177] * Using the docker driver based on user configuration
	I1011 21:53:34.088411 1096229 start.go:297] selected driver: docker
	I1011 21:53:34.088433 1096229 start.go:901] validating driver "docker" against <nil>
	I1011 21:53:34.088450 1096229 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:53:34.089161 1096229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:53:34.144140 1096229 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:53:34.134594675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:53:34.144365 1096229 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 21:53:34.144603 1096229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1011 21:53:34.146689 1096229 out.go:177] * Using Docker driver with root privileges
	I1011 21:53:34.148533 1096229 cni.go:84] Creating CNI manager for ""
	I1011 21:53:34.148597 1096229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 21:53:34.148613 1096229 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 21:53:34.148690 1096229 start.go:340] cluster config:
	{Name:embed-certs-159135 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-159135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:53:34.150426 1096229 out.go:177] * Starting "embed-certs-159135" primary control-plane node in "embed-certs-159135" cluster
	I1011 21:53:34.152292 1096229 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1011 21:53:34.154006 1096229 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1011 21:53:34.155754 1096229 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 21:53:34.155809 1096229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1011 21:53:34.155821 1096229 cache.go:56] Caching tarball of preloaded images
	I1011 21:53:34.155876 1096229 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 21:53:34.155905 1096229 preload.go:172] Found /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1011 21:53:34.155915 1096229 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1011 21:53:34.156024 1096229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/config.json ...
	I1011 21:53:34.156040 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/config.json: {Name:mke0d62509b891644bc8d220daa08f407128d9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:34.181405 1096229 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1011 21:53:34.181429 1096229 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1011 21:53:34.181450 1096229 cache.go:194] Successfully downloaded all kic artifacts
	I1011 21:53:34.181487 1096229 start.go:360] acquireMachinesLock for embed-certs-159135: {Name:mkaf1dfd7982f2b45b4cf4127ca465fed2b83e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1011 21:53:34.182089 1096229 start.go:364] duration metric: took 579.386µs to acquireMachinesLock for "embed-certs-159135"
	I1011 21:53:34.182127 1096229 start.go:93] Provisioning new machine with config: &{Name:embed-certs-159135 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-159135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1011 21:53:34.182210 1096229 start.go:125] createHost starting for "" (driver="docker")
	I1011 21:53:33.238667 1085624 pod_ready.go:103] pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace has status "Ready":"False"
	I1011 21:53:34.738636 1085624 pod_ready.go:82] duration metric: took 4m0.008524527s for pod "metrics-server-9975d5f86-mv42d" in "kube-system" namespace to be "Ready" ...
	E1011 21:53:34.738661 1085624 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1011 21:53:34.738671 1085624 pod_ready.go:39] duration metric: took 5m28.370895394s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1011 21:53:34.738687 1085624 api_server.go:52] waiting for apiserver process to appear ...
	I1011 21:53:34.738717 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:53:34.738782 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:53:34.825090 1085624 cri.go:89] found id: "9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:34.825113 1085624 cri.go:89] found id: "5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:34.825117 1085624 cri.go:89] found id: ""
	I1011 21:53:34.825125 1085624 logs.go:282] 2 containers: [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b]
	I1011 21:53:34.825187 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.829431 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.833120 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1011 21:53:34.833195 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:53:34.931287 1085624 cri.go:89] found id: "9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:34.931307 1085624 cri.go:89] found id: "82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:34.931312 1085624 cri.go:89] found id: ""
	I1011 21:53:34.931320 1085624 logs.go:282] 2 containers: [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab]
	I1011 21:53:34.931374 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.935119 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:34.939281 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1011 21:53:34.939365 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:53:34.998718 1085624 cri.go:89] found id: "cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:34.998741 1085624 cri.go:89] found id: "eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:34.998747 1085624 cri.go:89] found id: ""
	I1011 21:53:34.998755 1085624 logs.go:282] 2 containers: [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848]
	I1011 21:53:34.998810 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.005603 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.019425 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:53:35.019509 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:53:35.106941 1085624 cri.go:89] found id: "407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:35.106965 1085624 cri.go:89] found id: "5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:35.106972 1085624 cri.go:89] found id: ""
	I1011 21:53:35.106980 1085624 logs.go:282] 2 containers: [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7]
	I1011 21:53:35.107033 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.111057 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.114945 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:53:35.115039 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:53:35.205933 1085624 cri.go:89] found id: "bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:35.205958 1085624 cri.go:89] found id: "032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:35.205964 1085624 cri.go:89] found id: ""
	I1011 21:53:35.205972 1085624 logs.go:282] 2 containers: [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3]
	I1011 21:53:35.206027 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.211730 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.216548 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:53:35.216618 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:53:35.263659 1085624 cri.go:89] found id: "be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:35.263681 1085624 cri.go:89] found id: "ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:35.263687 1085624 cri.go:89] found id: ""
	I1011 21:53:35.263697 1085624 logs.go:282] 2 containers: [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb]
	I1011 21:53:35.263749 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.267382 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.270717 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1011 21:53:35.270784 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:53:35.326779 1085624 cri.go:89] found id: "eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:35.326803 1085624 cri.go:89] found id: "8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:35.326809 1085624 cri.go:89] found id: ""
	I1011 21:53:35.326816 1085624 logs.go:282] 2 containers: [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e]
	I1011 21:53:35.326870 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.331050 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.335186 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 21:53:35.335263 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 21:53:35.386234 1085624 cri.go:89] found id: "35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:35.386257 1085624 cri.go:89] found id: ""
	I1011 21:53:35.386299 1085624 logs.go:282] 1 containers: [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3]
	I1011 21:53:35.386354 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.390098 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1011 21:53:35.390174 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1011 21:53:35.437777 1085624 cri.go:89] found id: "2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:35.437802 1085624 cri.go:89] found id: "eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:35.437807 1085624 cri.go:89] found id: ""
	I1011 21:53:35.437815 1085624 logs.go:282] 2 containers: [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8]
	I1011 21:53:35.437870 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:35.442836 1085624 ssh_runner.go:195] Run: which crictl
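Each component above is located the same way: `crictl ps -a --quiet --name=<component>` returns bare container IDs, which logs.go:282 then counts. A sketch of that discovery step, assuming crictl is runnable locally rather than through minikube's SSH runner (listContainerIDs is an illustrative name):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the pattern in the log: ask crictl for the
// quiet (IDs-only) list of containers whose name matches a component.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// One container ID per output line, as seen in the cri.go:89 lines.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```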
	I1011 21:53:35.446492 1085624 logs.go:123] Gathering logs for kindnet [8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e] ...
	I1011 21:53:35.446518 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:35.496118 1085624 logs.go:123] Gathering logs for containerd ...
	I1011 21:53:35.496148 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1011 21:53:35.575504 1085624 logs.go:123] Gathering logs for etcd [82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab] ...
	I1011 21:53:35.575546 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:35.635236 1085624 logs.go:123] Gathering logs for coredns [eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848] ...
	I1011 21:53:35.635267 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:35.680667 1085624 logs.go:123] Gathering logs for kube-scheduler [5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7] ...
	I1011 21:53:35.680697 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:35.736511 1085624 logs.go:123] Gathering logs for kube-controller-manager [ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb] ...
	I1011 21:53:35.736543 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:35.817611 1085624 logs.go:123] Gathering logs for storage-provisioner [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480] ...
	I1011 21:53:35.817642 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:35.886476 1085624 logs.go:123] Gathering logs for container status ...
	I1011 21:53:35.886505 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:53:35.944581 1085624 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:53:35.944611 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:53:36.129491 1085624 logs.go:123] Gathering logs for etcd [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1] ...
	I1011 21:53:36.129525 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:36.186921 1085624 logs.go:123] Gathering logs for coredns [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb] ...
	I1011 21:53:36.186951 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:36.240289 1085624 logs.go:123] Gathering logs for kubernetes-dashboard [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3] ...
	I1011 21:53:36.240319 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:36.295024 1085624 logs.go:123] Gathering logs for dmesg ...
	I1011 21:53:36.295056 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:53:36.311832 1085624 logs.go:123] Gathering logs for kube-proxy [032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3] ...
	I1011 21:53:36.311858 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:36.365599 1085624 logs.go:123] Gathering logs for kindnet [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e] ...
	I1011 21:53:36.365630 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:36.432040 1085624 logs.go:123] Gathering logs for kube-scheduler [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466] ...
	I1011 21:53:36.432125 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:36.481093 1085624 logs.go:123] Gathering logs for kube-proxy [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31] ...
	I1011 21:53:36.481168 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:36.528111 1085624 logs.go:123] Gathering logs for kube-controller-manager [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad] ...
	I1011 21:53:36.528197 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:36.601710 1085624 logs.go:123] Gathering logs for storage-provisioner [eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8] ...
	I1011 21:53:36.601784 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:36.644577 1085624 logs.go:123] Gathering logs for kubelet ...
	I1011 21:53:36.644650 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:53:36.704541 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:06 old-k8s-version-310298 kubelet[666]: E1011 21:48:06.555132     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-310298" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-310298' and this object
	W1011 21:53:36.708245 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:07 old-k8s-version-310298 kubelet[666]: E1011 21:48:07.898796     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.708442 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:08 old-k8s-version-310298 kubelet[666]: E1011 21:48:08.734852     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.711555 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:22 old-k8s-version-310298 kubelet[666]: E1011 21:48:22.357549     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.713401 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.350204     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.713733 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.842524     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.714598 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:35 old-k8s-version-310298 kubelet[666]: E1011 21:48:35.846225     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.714931 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.052714     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.715374 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.856716     666 pod_workers.go:191] Error syncing pod 90e529b8-25e8-41b1-9f66-bfd3ec245bf3 ("storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"
	W1011 21:53:36.718188 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:47 old-k8s-version-310298 kubelet[666]: E1011 21:48:47.365069     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.720517 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:51 old-k8s-version-310298 kubelet[666]: E1011 21:48:51.891977     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.720883 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:58 old-k8s-version-310298 kubelet[666]: E1011 21:48:58.053122     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.721072 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:00 old-k8s-version-310298 kubelet[666]: E1011 21:49:00.350678     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.721405 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:10 old-k8s-version-310298 kubelet[666]: E1011 21:49:10.350114     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.721596 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:13 old-k8s-version-310298 kubelet[666]: E1011 21:49:13.351545     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.722333 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:25 old-k8s-version-310298 kubelet[666]: E1011 21:49:25.986763     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.722540 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:27 old-k8s-version-310298 kubelet[666]: E1011 21:49:27.349958     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.722965 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:28 old-k8s-version-310298 kubelet[666]: E1011 21:49:28.052147     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.725465 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:38 old-k8s-version-310298 kubelet[666]: E1011 21:49:38.361564     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.725800 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:39 old-k8s-version-310298 kubelet[666]: E1011 21:49:39.350464     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.725988 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:49 old-k8s-version-310298 kubelet[666]: E1011 21:49:49.350331     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.726339 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:52 old-k8s-version-310298 kubelet[666]: E1011 21:49:52.349607     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.726524 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:02 old-k8s-version-310298 kubelet[666]: E1011 21:50:02.350044     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.726867 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:03 old-k8s-version-310298 kubelet[666]: E1011 21:50:03.353309     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.727051 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:14 old-k8s-version-310298 kubelet[666]: E1011 21:50:14.349894     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.727646 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:16 old-k8s-version-310298 kubelet[666]: E1011 21:50:16.174777     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.727979 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:18 old-k8s-version-310298 kubelet[666]: E1011 21:50:18.052598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.728163 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:25 old-k8s-version-310298 kubelet[666]: E1011 21:50:25.351630     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.728494 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:32 old-k8s-version-310298 kubelet[666]: E1011 21:50:32.349548     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.728681 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:38 old-k8s-version-310298 kubelet[666]: E1011 21:50:38.350026     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.729010 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:44 old-k8s-version-310298 kubelet[666]: E1011 21:50:44.349518     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.729195 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:50 old-k8s-version-310298 kubelet[666]: E1011 21:50:50.350344     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.729527 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:57 old-k8s-version-310298 kubelet[666]: E1011 21:50:57.350003     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.732194 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:03 old-k8s-version-310298 kubelet[666]: E1011 21:51:03.362364     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:36.732632 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:10 old-k8s-version-310298 kubelet[666]: E1011 21:51:10.349716     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.732836 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:14 old-k8s-version-310298 kubelet[666]: E1011 21:51:14.349911     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.733191 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:22 old-k8s-version-310298 kubelet[666]: E1011 21:51:22.349634     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.733391 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:25 old-k8s-version-310298 kubelet[666]: E1011 21:51:25.350570     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.734045 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:38 old-k8s-version-310298 kubelet[666]: E1011 21:51:38.414410     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.734241 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:40 old-k8s-version-310298 kubelet[666]: E1011 21:51:40.350770     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.734639 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:48 old-k8s-version-310298 kubelet[666]: E1011 21:51:48.052903     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.734874 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:52 old-k8s-version-310298 kubelet[666]: E1011 21:51:52.349920     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.735251 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:00 old-k8s-version-310298 kubelet[666]: E1011 21:52:00.354871     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.735538 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:06 old-k8s-version-310298 kubelet[666]: E1011 21:52:06.350036     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.735906 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:12 old-k8s-version-310298 kubelet[666]: E1011 21:52:12.349554     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.736125 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:21 old-k8s-version-310298 kubelet[666]: E1011 21:52:21.350139     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.736560 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:27 old-k8s-version-310298 kubelet[666]: E1011 21:52:27.349598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.736792 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:32 old-k8s-version-310298 kubelet[666]: E1011 21:52:32.349985     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.737162 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: E1011 21:52:38.349563     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.737379 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:43 old-k8s-version-310298 kubelet[666]: E1011 21:52:43.350681     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.737757 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: E1011 21:52:52.349495     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.737974 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:54 old-k8s-version-310298 kubelet[666]: E1011 21:52:54.349999     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.738333 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: E1011 21:53:04.349825     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.738534 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.738963 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.739154 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.739364 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.739719 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:36.739742 1085624 logs.go:123] Gathering logs for kube-apiserver [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094] ...
	I1011 21:53:36.739760 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:36.834629 1085624 logs.go:123] Gathering logs for kube-apiserver [5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b] ...
	I1011 21:53:36.834706 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:36.904644 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:36.904674 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:53:36.904730 1085624 out.go:270] X Problems detected in kubelet:
	W1011 21:53:36.904744 1085624 out.go:270]   Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904753 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:36.904768 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904774 1085624 out.go:270]   Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:36.904780 1085624 out.go:270]   Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:36.904803 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:36.904815 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
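The "Problems detected in kubelet" summary above is distilled from the journalctl output gathered earlier: lines matching known failure markers are flagged as they stream past (logs.go:138). A rough sketch of that scan, assuming a plain substring filter rather than minikube's actual rule set:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the last 400 kubelet journal lines, as the log above does,
	// then flag lines that look like pod sync errors. The marker string
	// here is an assumption, not minikube's exact heuristics.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "Error syncing pod") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
```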
	I1011 21:53:34.184647 1096229 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1011 21:53:34.184889 1096229 start.go:159] libmachine.API.Create for "embed-certs-159135" (driver="docker")
	I1011 21:53:34.184921 1096229 client.go:168] LocalClient.Create starting
	I1011 21:53:34.184982 1096229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem
	I1011 21:53:34.185031 1096229 main.go:141] libmachine: Decoding PEM data...
	I1011 21:53:34.185046 1096229 main.go:141] libmachine: Parsing certificate...
	I1011 21:53:34.185103 1096229 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem
	I1011 21:53:34.185128 1096229 main.go:141] libmachine: Decoding PEM data...
	I1011 21:53:34.185143 1096229 main.go:141] libmachine: Parsing certificate...
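The client.go:168 and main.go lines above read the CA and client certificates and parse them before any Docker work begins. A sketch of that decode-and-parse step on the ca.pem path from the log, using the standard library (this is the generic PEM flow, not minikube's exact code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Certificate path copied from the log; any PEM-encoded cert works.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println("subject:", cert.Subject)
}
```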
	I1011 21:53:34.185526 1096229 cli_runner.go:164] Run: docker network inspect embed-certs-159135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1011 21:53:34.201324 1096229 cli_runner.go:211] docker network inspect embed-certs-159135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1011 21:53:34.201417 1096229 network_create.go:284] running [docker network inspect embed-certs-159135] to gather additional debugging logs...
	I1011 21:53:34.201442 1096229 cli_runner.go:164] Run: docker network inspect embed-certs-159135
	W1011 21:53:34.217012 1096229 cli_runner.go:211] docker network inspect embed-certs-159135 returned with exit code 1
	I1011 21:53:34.217048 1096229 network_create.go:287] error running [docker network inspect embed-certs-159135]: docker network inspect embed-certs-159135: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-159135 not found
	I1011 21:53:34.217061 1096229 network_create.go:289] output of [docker network inspect embed-certs-159135]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-159135 not found
	
	** /stderr **
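The inspect failure above is expected on first creation: the formatted `docker network inspect` exits 1 because the network does not exist yet, and network_create.go:284-289 re-runs the bare command solely to capture stdout/stderr for the debug log. A sketch of that fallback (the format string is simplified here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "embed-certs-159135" // network name from the log
	// First attempt: formatted inspect succeeds only if the network exists.
	if out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{.Name}}").Output(); err == nil {
		fmt.Printf("network exists: %s", out)
		return
	}
	// Fallback: plain inspect, keeping stdout and stderr separately for
	// debugging, as the -- stdout -- / ** stderr ** blocks above show.
	stdout, err := exec.Command("docker", "network", "inspect", name).Output()
	fmt.Printf("-- stdout --\n%s\n-- /stdout --\n", stdout)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("** stderr **\n%s** /stderr **\n", ee.Stderr)
	}
}
```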
	I1011 21:53:34.217164 1096229 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 21:53:34.234120 1096229 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a84816c0b608 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ce:0f:5d:9f} reservation:<nil>}
	I1011 21:53:34.234959 1096229 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f4867bba202d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:5e:ee:91:87} reservation:<nil>}
	I1011 21:53:34.236642 1096229 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c67640b8697f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:1c:41:42:04} reservation:<nil>}
	I1011 21:53:34.237033 1096229 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bf142940a760 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f1:fe:74:a7} reservation:<nil>}
	I1011 21:53:34.237559 1096229 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e4180}
	I1011 21:53:34.237618 1096229 network_create.go:124] attempt to create docker network embed-certs-159135 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1011 21:53:34.237682 1096229 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-159135 embed-certs-159135
	I1011 21:53:34.312108 1096229 network_create.go:108] docker network embed-certs-159135 192.168.85.0/24 created
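The network.go lines above show the free-subnet walk: candidate 192.168.x.0/24 ranges starting at 49, stepping the third octet until one is not already claimed. A sketch of that walk, seeded with the taken subnets from this run; the step of 9 matches the observed sequence (49, 58, 67, 76, 85) and the upper bound is an assumption:

```go
package main

import "fmt"

func main() {
	// Subnets already claimed by other minikube networks, per the log.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for octet := 49; octet < 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 here
		break
	}
}
```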
	I1011 21:53:34.312140 1096229 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-159135" container
	I1011 21:53:34.312223 1096229 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1011 21:53:34.326712 1096229 cli_runner.go:164] Run: docker volume create embed-certs-159135 --label name.minikube.sigs.k8s.io=embed-certs-159135 --label created_by.minikube.sigs.k8s.io=true
	I1011 21:53:34.342909 1096229 oci.go:103] Successfully created a docker volume embed-certs-159135
	I1011 21:53:34.342995 1096229 cli_runner.go:164] Run: docker run --rm --name embed-certs-159135-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-159135 --entrypoint /usr/bin/test -v embed-certs-159135:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1011 21:53:34.976504 1096229 oci.go:107] Successfully prepared a docker volume embed-certs-159135
	I1011 21:53:34.976567 1096229 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 21:53:34.976596 1096229 kic.go:194] Starting extracting preloaded images to volume ...
	I1011 21:53:34.976666 1096229 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-159135:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1011 21:53:40.394070 1096229 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-159135:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (5.417366946s)
	I1011 21:53:40.394133 1096229 kic.go:203] duration metric: took 5.417543708s to extract preloaded images to volume ...
	W1011 21:53:40.394415 1096229 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1011 21:53:40.394545 1096229 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1011 21:53:40.449628 1096229 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-159135 --name embed-certs-159135 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-159135 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-159135 --network embed-certs-159135 --ip 192.168.85.2 --volume embed-certs-159135:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1011 21:53:40.796828 1096229 cli_runner.go:164] Run: docker container inspect embed-certs-159135 --format={{.State.Running}}
	I1011 21:53:40.816612 1096229 cli_runner.go:164] Run: docker container inspect embed-certs-159135 --format={{.State.Status}}
	I1011 21:53:40.838089 1096229 cli_runner.go:164] Run: docker exec embed-certs-159135 stat /var/lib/dpkg/alternatives/iptables
	I1011 21:53:40.924954 1096229 oci.go:144] the created container "embed-certs-159135" has a running status.
	I1011 21:53:40.924982 1096229 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa...
	I1011 21:53:41.323275 1096229 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1011 21:53:41.346768 1096229 cli_runner.go:164] Run: docker container inspect embed-certs-159135 --format={{.State.Status}}
	I1011 21:53:41.371396 1096229 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1011 21:53:41.371416 1096229 kic_runner.go:114] Args: [docker exec --privileged embed-certs-159135 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1011 21:53:41.467881 1096229 cli_runner.go:164] Run: docker container inspect embed-certs-159135 --format={{.State.Status}}
	I1011 21:53:41.492534 1096229 machine.go:93] provisionDockerMachine start ...
	I1011 21:53:41.492635 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:41.520009 1096229 main.go:141] libmachine: Using SSH client type: native
	I1011 21:53:41.520319 1096229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I1011 21:53:41.520335 1096229 main.go:141] libmachine: About to run SSH command:
	hostname
	I1011 21:53:41.718981 1096229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-159135
	
	I1011 21:53:41.719018 1096229 ubuntu.go:169] provisioning hostname "embed-certs-159135"
	I1011 21:53:41.719097 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:41.747702 1096229 main.go:141] libmachine: Using SSH client type: native
	I1011 21:53:41.747974 1096229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I1011 21:53:41.747995 1096229 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-159135 && echo "embed-certs-159135" | sudo tee /etc/hostname
	I1011 21:53:41.901199 1096229 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-159135
	
	I1011 21:53:41.901350 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:41.925894 1096229 main.go:141] libmachine: Using SSH client type: native
	I1011 21:53:41.926135 1096229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 34180 <nil> <nil>}
	I1011 21:53:41.926153 1096229 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-159135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-159135/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-159135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1011 21:53:42.063444 1096229 main.go:141] libmachine: SSH cmd err, output: <nil>: 
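
Everything from provisionDockerMachine onward runs over SSH to the container's published port 22 (mapped to 127.0.0.1:34180 here), authenticated with the id_rsa generated a few lines earlier. A sketch of that transport using golang.org/x/crypto/ssh (an assumption, not necessarily libmachine's exact client; the key path and port are the log's):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        // Host-key checking is relaxed here: the target is a local kic container.
        client, err := ssh.Dial("tcp", "127.0.0.1:34180", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote hostname: %s", out) // log reports embed-certs-159135
    }
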
	I1011 21:53:42.063539 1096229 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19749-870468/.minikube CaCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19749-870468/.minikube}
	I1011 21:53:42.063592 1096229 ubuntu.go:177] setting up certificates
	I1011 21:53:42.063625 1096229 provision.go:84] configureAuth start
	I1011 21:53:42.063740 1096229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-159135
	I1011 21:53:42.092162 1096229 provision.go:143] copyHostCerts
	I1011 21:53:42.092228 1096229 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem, removing ...
	I1011 21:53:42.092250 1096229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem
	I1011 21:53:42.092412 1096229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/ca.pem (1078 bytes)
	I1011 21:53:42.092589 1096229 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem, removing ...
	I1011 21:53:42.092598 1096229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem
	I1011 21:53:42.092648 1096229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/cert.pem (1123 bytes)
	I1011 21:53:42.092720 1096229 exec_runner.go:144] found /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem, removing ...
	I1011 21:53:42.092726 1096229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem
	I1011 21:53:42.092766 1096229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19749-870468/.minikube/key.pem (1675 bytes)
	I1011 21:53:42.092834 1096229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem org=jenkins.embed-certs-159135 san=[127.0.0.1 192.168.85.2 embed-certs-159135 localhost minikube]
	I1011 21:53:42.408655 1096229 provision.go:177] copyRemoteCerts
	I1011 21:53:42.408735 1096229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1011 21:53:42.408785 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:42.428345 1096229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa Username:docker}
	I1011 21:53:42.523357 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1011 21:53:42.549039 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1011 21:53:42.573889 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1011 21:53:42.599513 1096229 provision.go:87] duration metric: took 535.859928ms to configureAuth
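
configureAuth signs a server certificate with the profile CA, embedding the SANs listed in the log (127.0.0.1, 192.168.85.2, embed-certs-159135, localhost, minikube). A self-contained Go sketch of the same shape, with a throwaway in-process CA and a one-day lifetime standing in for minikube's real files and expiry:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem.
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

        // Server cert carrying the SANs from the log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-159135"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"embed-certs-159135", "localhost", "minikube"},
        }
        der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
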
	I1011 21:53:42.599540 1096229 ubuntu.go:193] setting minikube options for container-runtime
	I1011 21:53:42.599739 1096229 config.go:182] Loaded profile config "embed-certs-159135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:53:42.599753 1096229 machine.go:96] duration metric: took 1.107199906s to provisionDockerMachine
	I1011 21:53:42.599760 1096229 client.go:171] duration metric: took 8.414833246s to LocalClient.Create
	I1011 21:53:42.599775 1096229 start.go:167] duration metric: took 8.41488648s to libmachine.API.Create "embed-certs-159135"
	I1011 21:53:42.599787 1096229 start.go:293] postStartSetup for "embed-certs-159135" (driver="docker")
	I1011 21:53:42.599796 1096229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1011 21:53:42.599854 1096229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1011 21:53:42.599906 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:42.616281 1096229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa Username:docker}
	I1011 21:53:42.716288 1096229 ssh_runner.go:195] Run: cat /etc/os-release
	I1011 21:53:42.719674 1096229 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1011 21:53:42.719719 1096229 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1011 21:53:42.719731 1096229 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1011 21:53:42.719738 1096229 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1011 21:53:42.719749 1096229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/addons for local assets ...
	I1011 21:53:42.719812 1096229 filesync.go:126] Scanning /home/jenkins/minikube-integration/19749-870468/.minikube/files for local assets ...
	I1011 21:53:42.719898 1096229 filesync.go:149] local asset: /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem -> 8758612.pem in /etc/ssl/certs
	I1011 21:53:42.720009 1096229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1011 21:53:42.728728 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem --> /etc/ssl/certs/8758612.pem (1708 bytes)
	I1011 21:53:42.753559 1096229 start.go:296] duration metric: took 153.757025ms for postStartSetup
	I1011 21:53:42.753943 1096229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-159135
	I1011 21:53:42.774873 1096229 profile.go:143] Saving config to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/config.json ...
	I1011 21:53:42.775160 1096229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:53:42.775214 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:42.791580 1096229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa Username:docker}
	I1011 21:53:42.883714 1096229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1011 21:53:42.888422 1096229 start.go:128] duration metric: took 8.706196361s to createHost
	I1011 21:53:42.888446 1096229 start.go:83] releasing machines lock for "embed-certs-159135", held for 8.706339531s
	I1011 21:53:42.888526 1096229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-159135
	I1011 21:53:42.905169 1096229 ssh_runner.go:195] Run: cat /version.json
	I1011 21:53:42.905212 1096229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1011 21:53:42.905221 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:42.905273 1096229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-159135
	I1011 21:53:42.922689 1096229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa Username:docker}
	I1011 21:53:42.924397 1096229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34180 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/embed-certs-159135/id_rsa Username:docker}
	I1011 21:53:43.146121 1096229 ssh_runner.go:195] Run: systemctl --version
	I1011 21:53:43.150647 1096229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1011 21:53:43.155052 1096229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1011 21:53:43.182587 1096229 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1011 21:53:43.182679 1096229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1011 21:53:43.213655 1096229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1011 21:53:43.213681 1096229 start.go:495] detecting cgroup driver to use...
	I1011 21:53:43.213713 1096229 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1011 21:53:43.213768 1096229 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1011 21:53:43.226441 1096229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1011 21:53:43.238366 1096229 docker.go:217] disabling cri-docker service (if available) ...
	I1011 21:53:43.238481 1096229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1011 21:53:43.252630 1096229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1011 21:53:43.267556 1096229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1011 21:53:43.362452 1096229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1011 21:53:43.457524 1096229 docker.go:233] disabling docker service ...
	I1011 21:53:43.457605 1096229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1011 21:53:43.481056 1096229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1011 21:53:43.494322 1096229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1011 21:53:43.577403 1096229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1011 21:53:43.667034 1096229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1011 21:53:43.678642 1096229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1011 21:53:43.696458 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1011 21:53:43.707132 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1011 21:53:43.718658 1096229 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1011 21:53:43.718753 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1011 21:53:43.729658 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 21:53:43.740368 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1011 21:53:43.751058 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1011 21:53:43.760792 1096229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1011 21:53:43.770704 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1011 21:53:43.780626 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1011 21:53:43.790699 1096229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1011 21:53:43.801623 1096229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1011 21:53:43.810238 1096229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1011 21:53:43.820561 1096229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:53:43.914512 1096229 ssh_runner.go:195] Run: sudo systemctl restart containerd
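
The sed runs above converge /etc/containerd/config.toml on the cgroupfs driver, the runc v2 runtime, the pause:3.10 sandbox image, and unprivileged ports before containerd is restarted. An illustrative fragment of the resulting file (section paths per containerd 1.7's CRI plugin; only the touched keys shown):

    # Illustrative /etc/containerd/config.toml fragment after the edits above.
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
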
	I1011 21:53:44.053949 1096229 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1011 21:53:44.054063 1096229 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1011 21:53:44.057869 1096229 start.go:563] Will wait 60s for crictl version
	I1011 21:53:44.057961 1096229 ssh_runner.go:195] Run: which crictl
	I1011 21:53:44.061285 1096229 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1011 21:53:44.102087 1096229 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
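
Both waits above are bounded at 60s: first for the containerd socket to appear, then for crictl version to succeed. A minimal Go sketch of that poll loop (the half-second retry interval is an assumption):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls ok until it succeeds or the deadline passes.
    func waitFor(deadline time.Duration, ok func() bool) error {
        start := time.Now()
        for time.Since(start) < deadline {
            if ok() {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s", deadline)
    }

    func main() {
        sock := "/run/containerd/containerd.sock"
        if err := waitFor(60*time.Second, func() bool {
            _, err := os.Stat(sock)
            return err == nil
        }); err != nil {
            panic(err)
        }
        if err := waitFor(60*time.Second, func() bool {
            return exec.Command("sudo", "crictl", "version").Run() == nil
        }); err != nil {
            panic(err)
        }
        fmt.Println("container runtime ready")
    }
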
	I1011 21:53:44.102185 1096229 ssh_runner.go:195] Run: containerd --version
	I1011 21:53:44.132312 1096229 ssh_runner.go:195] Run: containerd --version
	I1011 21:53:44.156774 1096229 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1011 21:53:44.159304 1096229 cli_runner.go:164] Run: docker network inspect embed-certs-159135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1011 21:53:44.174562 1096229 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1011 21:53:44.178072 1096229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:53:44.189013 1096229 kubeadm.go:883] updating cluster {Name:embed-certs-159135 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-159135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1011 21:53:44.189128 1096229 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 21:53:44.189192 1096229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:53:44.225797 1096229 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 21:53:44.225818 1096229 containerd.go:534] Images already preloaded, skipping extraction
	I1011 21:53:44.225876 1096229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1011 21:53:44.263574 1096229 containerd.go:627] all images are preloaded for containerd runtime.
	I1011 21:53:44.263599 1096229 cache_images.go:84] Images are preloaded, skipping loading
	I1011 21:53:44.263608 1096229 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I1011 21:53:44.263697 1096229 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-159135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-159135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1011 21:53:44.263765 1096229 ssh_runner.go:195] Run: sudo crictl info
	I1011 21:53:44.303402 1096229 cni.go:84] Creating CNI manager for ""
	I1011 21:53:44.303427 1096229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 21:53:44.303438 1096229 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1011 21:53:44.303490 1096229 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-159135 NodeName:embed-certs-159135 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1011 21:53:44.303642 1096229 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-159135"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1011 21:53:44.303718 1096229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1011 21:53:44.314027 1096229 binaries.go:44] Found k8s binaries, skipping transfer
	I1011 21:53:44.314098 1096229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1011 21:53:44.322738 1096229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1011 21:53:44.340026 1096229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1011 21:53:44.358681 1096229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1011 21:53:44.377919 1096229 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1011 21:53:44.381498 1096229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1011 21:53:44.392567 1096229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1011 21:53:44.489652 1096229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1011 21:53:44.505658 1096229 certs.go:68] Setting up /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135 for IP: 192.168.85.2
	I1011 21:53:44.505677 1096229 certs.go:194] generating shared ca certs ...
	I1011 21:53:44.505694 1096229 certs.go:226] acquiring lock for ca certs: {Name:mk314562fa38b26f30da8f33a861c5cef3708653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:44.505831 1096229 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key
	I1011 21:53:44.505870 1096229 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key
	I1011 21:53:44.505877 1096229 certs.go:256] generating profile certs ...
	I1011 21:53:44.505936 1096229 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.key
	I1011 21:53:44.505946 1096229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.crt with IP's: []
	I1011 21:53:45.010453 1096229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.crt ...
	I1011 21:53:45.010496 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.crt: {Name:mk2b39cb74503878d508a95308899dd297097cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.011660 1096229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.key ...
	I1011 21:53:45.011692 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/client.key: {Name:mk229d5e636477501890bab068ef0a4c462efd69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.017496 1096229 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key.a2b2b3ef
	I1011 21:53:45.017579 1096229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt.a2b2b3ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1011 21:53:45.553115 1096229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt.a2b2b3ef ...
	I1011 21:53:45.553145 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt.a2b2b3ef: {Name:mkef631f34bcc96caf6a58c07b9dd423e007ac1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.553757 1096229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key.a2b2b3ef ...
	I1011 21:53:45.553777 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key.a2b2b3ef: {Name:mk9a66f229139574f6b49ca765da982e8c3be6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.554456 1096229 certs.go:381] copying /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt.a2b2b3ef -> /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt
	I1011 21:53:45.554547 1096229 certs.go:385] copying /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key.a2b2b3ef -> /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key
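
The apiserver SAN list above includes 10.96.0.1, which is not arbitrary: it is the first usable address of the ServiceCIDR 10.96.0.0/12 from the config, where the in-cluster kubernetes Service answers. Deriving it in Go:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, cidr, err := net.ParseCIDR("10.96.0.0/12") // ServiceCIDR from the config
        if err != nil {
            panic(err)
        }
        first := cidr.IP.To4()
        first[3]++ // network address + 1
        fmt.Println(first) // 10.96.0.1
    }
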
	I1011 21:53:45.554694 1096229 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.key
	I1011 21:53:45.554720 1096229 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.crt with IP's: []
	I1011 21:53:45.760060 1096229 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.crt ...
	I1011 21:53:45.760098 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.crt: {Name:mk9b9b77415fffd8aed9458bf4377e9c96d5945b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.760795 1096229 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.key ...
	I1011 21:53:45.760812 1096229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.key: {Name:mk879c202f0f572039f75058fad2560881175c8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1011 21:53:45.761602 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861.pem (1338 bytes)
	W1011 21:53:45.761650 1096229 certs.go:480] ignoring /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861_empty.pem, impossibly tiny 0 bytes
	I1011 21:53:45.761664 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca-key.pem (1675 bytes)
	I1011 21:53:45.761687 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/ca.pem (1078 bytes)
	I1011 21:53:45.761714 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/cert.pem (1123 bytes)
	I1011 21:53:45.761743 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/certs/key.pem (1675 bytes)
	I1011 21:53:45.761793 1096229 certs.go:484] found cert: /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem (1708 bytes)
	I1011 21:53:45.762429 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1011 21:53:45.788870 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1011 21:53:45.814076 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1011 21:53:45.843049 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1011 21:53:45.866955 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1011 21:53:45.893488 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1011 21:53:45.919107 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1011 21:53:45.943635 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/embed-certs-159135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1011 21:53:45.968819 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/ssl/certs/8758612.pem --> /usr/share/ca-certificates/8758612.pem (1708 bytes)
	I1011 21:53:45.995382 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1011 21:53:46.026477 1096229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19749-870468/.minikube/certs/875861.pem --> /usr/share/ca-certificates/875861.pem (1338 bytes)
	I1011 21:53:46.051867 1096229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1011 21:53:46.071061 1096229 ssh_runner.go:195] Run: openssl version
	I1011 21:53:46.076940 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/875861.pem && ln -fs /usr/share/ca-certificates/875861.pem /etc/ssl/certs/875861.pem"
	I1011 21:53:46.086916 1096229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/875861.pem
	I1011 21:53:46.090576 1096229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 11 21:08 /usr/share/ca-certificates/875861.pem
	I1011 21:53:46.090650 1096229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/875861.pem
	I1011 21:53:46.097760 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/875861.pem /etc/ssl/certs/51391683.0"
	I1011 21:53:46.107645 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8758612.pem && ln -fs /usr/share/ca-certificates/8758612.pem /etc/ssl/certs/8758612.pem"
	I1011 21:53:46.117437 1096229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8758612.pem
	I1011 21:53:46.121489 1096229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 11 21:08 /usr/share/ca-certificates/8758612.pem
	I1011 21:53:46.121550 1096229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8758612.pem
	I1011 21:53:46.131763 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8758612.pem /etc/ssl/certs/3ec20f2e.0"
	I1011 21:53:46.141909 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1011 21:53:46.151654 1096229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:53:46.155988 1096229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 11 20:58 /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:53:46.156050 1096229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1011 21:53:46.163602 1096229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
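
The hex names being linked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: the library locates a trusted CA by <hash>.0 in /etc/ssl/certs, and openssl x509 -hash -noout prints exactly that value, as the log's own commands do. A Go sketch of one such link (needs root for /etc/ssl/certs; paths from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link if present
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link)
    }
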
	I1011 21:53:46.173281 1096229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1011 21:53:46.177546 1096229 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1011 21:53:46.177606 1096229 kubeadm.go:392] StartCluster: {Name:embed-certs-159135 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-159135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:53:46.177683 1096229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1011 21:53:46.177746 1096229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1011 21:53:46.220361 1096229 cri.go:89] found id: ""
	I1011 21:53:46.220429 1096229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1011 21:53:46.229652 1096229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1011 21:53:46.238974 1096229 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1011 21:53:46.239048 1096229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1011 21:53:46.248129 1096229 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1011 21:53:46.248149 1096229 kubeadm.go:157] found existing configuration files:
	
	I1011 21:53:46.248224 1096229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1011 21:53:46.257059 1096229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1011 21:53:46.257125 1096229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1011 21:53:46.265576 1096229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1011 21:53:46.274867 1096229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1011 21:53:46.274951 1096229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1011 21:53:46.283337 1096229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1011 21:53:46.292563 1096229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1011 21:53:46.292626 1096229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1011 21:53:46.301070 1096229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1011 21:53:46.310118 1096229 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1011 21:53:46.310212 1096229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1011 21:53:46.319438 1096229 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1011 21:53:46.364742 1096229 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1011 21:53:46.364863 1096229 kubeadm.go:310] [preflight] Running pre-flight checks
	I1011 21:53:46.384457 1096229 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1011 21:53:46.384599 1096229 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1011 21:53:46.384657 1096229 kubeadm.go:310] OS: Linux
	I1011 21:53:46.384736 1096229 kubeadm.go:310] CGROUPS_CPU: enabled
	I1011 21:53:46.384805 1096229 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1011 21:53:46.384883 1096229 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1011 21:53:46.384952 1096229 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1011 21:53:46.385031 1096229 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1011 21:53:46.385100 1096229 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1011 21:53:46.385176 1096229 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1011 21:53:46.385246 1096229 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1011 21:53:46.385337 1096229 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1011 21:53:46.451970 1096229 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1011 21:53:46.452132 1096229 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1011 21:53:46.452230 1096229 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1011 21:53:46.461193 1096229 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1011 21:53:46.905745 1085624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:53:46.917698 1085624 api_server.go:72] duration metric: took 5m59.560280279s to wait for apiserver process to appear ...
	I1011 21:53:46.917738 1085624 api_server.go:88] waiting for apiserver healthz status ...
	I1011 21:53:46.917784 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1011 21:53:46.917856 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1011 21:53:46.967727 1085624 cri.go:89] found id: "9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:46.967753 1085624 cri.go:89] found id: "5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:46.967764 1085624 cri.go:89] found id: ""
	I1011 21:53:46.967772 1085624 logs.go:282] 2 containers: [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b]
	I1011 21:53:46.967828 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:46.971860 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:46.975332 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1011 21:53:46.975399 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1011 21:53:47.030587 1085624 cri.go:89] found id: "9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:47.030620 1085624 cri.go:89] found id: "82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:47.030627 1085624 cri.go:89] found id: ""
	I1011 21:53:47.030634 1085624 logs.go:282] 2 containers: [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab]
	I1011 21:53:47.030701 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.037536 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.041167 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1011 21:53:47.041292 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1011 21:53:47.088464 1085624 cri.go:89] found id: "cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:47.088496 1085624 cri.go:89] found id: "eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:47.088502 1085624 cri.go:89] found id: ""
	I1011 21:53:47.088510 1085624 logs.go:282] 2 containers: [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848]
	I1011 21:53:47.088584 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.092201 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.095599 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1011 21:53:47.095668 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1011 21:53:47.163423 1085624 cri.go:89] found id: "407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:47.163496 1085624 cri.go:89] found id: "5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:47.163516 1085624 cri.go:89] found id: ""
	I1011 21:53:47.163537 1085624 logs.go:282] 2 containers: [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7]
	I1011 21:53:47.163621 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.167536 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.171184 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1011 21:53:47.171300 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1011 21:53:47.217944 1085624 cri.go:89] found id: "bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:47.218018 1085624 cri.go:89] found id: "032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:47.218037 1085624 cri.go:89] found id: ""
	I1011 21:53:47.218059 1085624 logs.go:282] 2 containers: [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3]
	I1011 21:53:47.218139 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.221936 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.225534 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1011 21:53:47.225649 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1011 21:53:47.305085 1085624 cri.go:89] found id: "be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:47.305157 1085624 cri.go:89] found id: "ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:47.305179 1085624 cri.go:89] found id: ""
	I1011 21:53:47.305205 1085624 logs.go:282] 2 containers: [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb]
	I1011 21:53:47.305298 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.309800 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.313696 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1011 21:53:47.313816 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1011 21:53:47.367189 1085624 cri.go:89] found id: "eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:47.367261 1085624 cri.go:89] found id: "8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:47.367281 1085624 cri.go:89] found id: ""
	I1011 21:53:47.367307 1085624 logs.go:282] 2 containers: [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e]
	I1011 21:53:47.367399 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.371197 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.374814 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1011 21:53:47.374923 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1011 21:53:47.427202 1085624 cri.go:89] found id: "35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:47.427272 1085624 cri.go:89] found id: ""
	I1011 21:53:47.427295 1085624 logs.go:282] 1 containers: [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3]
	I1011 21:53:47.427383 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.431104 1085624 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1011 21:53:47.431220 1085624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1011 21:53:47.479983 1085624 cri.go:89] found id: "2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:47.480057 1085624 cri.go:89] found id: "eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:47.480077 1085624 cri.go:89] found id: ""
	I1011 21:53:47.480103 1085624 logs.go:282] 2 containers: [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8]
	I1011 21:53:47.480192 1085624 ssh_runner.go:195] Run: which crictl
	I1011 21:53:47.483906 1085624 ssh_runner.go:195] Run: which crictl
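
The block above is minikube enumerating containers per component before collecting their logs. A minimal manual equivalent, run on the node over SSH (component names and crictl flags taken verbatim from the log; the loop itself is an illustration, not minikube's implementation):

    # List all containers (running and exited) for each component name,
    # mirroring the `sudo crictl ps -a --quiet --name=...` calls above.
    for name in coredns kube-scheduler kube-proxy kube-controller-manager \
                kindnet kubernetes-dashboard storage-provisioner; do
        echo "== ${name} =="
        sudo crictl ps -a --quiet --name="${name}"
    done
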
	I1011 21:53:47.487547 1085624 logs.go:123] Gathering logs for etcd [82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab] ...
	I1011 21:53:47.487608 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab"
	I1011 21:53:47.540211 1085624 logs.go:123] Gathering logs for kube-controller-manager [ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb] ...
	I1011 21:53:47.540289 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb"
	I1011 21:53:47.613759 1085624 logs.go:123] Gathering logs for kubelet ...
	I1011 21:53:47.613836 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1011 21:53:47.721084 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:06 old-k8s-version-310298 kubelet[666]: E1011 21:48:06.555132     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-310298" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-310298' and this object
	W1011 21:53:47.724782 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:07 old-k8s-version-310298 kubelet[666]: E1011 21:48:07.898796     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.725034 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:08 old-k8s-version-310298 kubelet[666]: E1011 21:48:08.734852     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.730739 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:22 old-k8s-version-310298 kubelet[666]: E1011 21:48:22.357549     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.732638 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.350204     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.733019 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:34 old-k8s-version-310298 kubelet[666]: E1011 21:48:34.842524     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.733834 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:35 old-k8s-version-310298 kubelet[666]: E1011 21:48:35.846225     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.734191 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.052714     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.734680 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:38 old-k8s-version-310298 kubelet[666]: E1011 21:48:38.856716     666 pod_workers.go:191] Error syncing pod 90e529b8-25e8-41b1-9f66-bfd3ec245bf3 ("storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90e529b8-25e8-41b1-9f66-bfd3ec245bf3)"
	W1011 21:53:47.737472 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:47 old-k8s-version-310298 kubelet[666]: E1011 21:48:47.365069     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.738216 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:51 old-k8s-version-310298 kubelet[666]: E1011 21:48:51.891977     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.738581 1085624 logs.go:138] Found kubelet problem: Oct 11 21:48:58 old-k8s-version-310298 kubelet[666]: E1011 21:48:58.053122     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.738795 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:00 old-k8s-version-310298 kubelet[666]: E1011 21:49:00.350678     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.739153 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:10 old-k8s-version-310298 kubelet[666]: E1011 21:49:10.350114     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.739369 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:13 old-k8s-version-310298 kubelet[666]: E1011 21:49:13.351545     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.740033 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:25 old-k8s-version-310298 kubelet[666]: E1011 21:49:25.986763     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.740248 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:27 old-k8s-version-310298 kubelet[666]: E1011 21:49:27.349958     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.740605 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:28 old-k8s-version-310298 kubelet[666]: E1011 21:49:28.052147     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.743293 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:38 old-k8s-version-310298 kubelet[666]: E1011 21:49:38.361564     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.743630 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:39 old-k8s-version-310298 kubelet[666]: E1011 21:49:39.350464     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.743812 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:49 old-k8s-version-310298 kubelet[666]: E1011 21:49:49.350331     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.744136 1085624 logs.go:138] Found kubelet problem: Oct 11 21:49:52 old-k8s-version-310298 kubelet[666]: E1011 21:49:52.349607     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.744316 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:02 old-k8s-version-310298 kubelet[666]: E1011 21:50:02.350044     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.744646 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:03 old-k8s-version-310298 kubelet[666]: E1011 21:50:03.353309     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.744826 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:14 old-k8s-version-310298 kubelet[666]: E1011 21:50:14.349894     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.745409 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:16 old-k8s-version-310298 kubelet[666]: E1011 21:50:16.174777     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.745736 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:18 old-k8s-version-310298 kubelet[666]: E1011 21:50:18.052598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.745914 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:25 old-k8s-version-310298 kubelet[666]: E1011 21:50:25.351630     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.746239 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:32 old-k8s-version-310298 kubelet[666]: E1011 21:50:32.349548     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.746549 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:38 old-k8s-version-310298 kubelet[666]: E1011 21:50:38.350026     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.746932 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:44 old-k8s-version-310298 kubelet[666]: E1011 21:50:44.349518     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.747144 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:50 old-k8s-version-310298 kubelet[666]: E1011 21:50:50.350344     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.747503 1085624 logs.go:138] Found kubelet problem: Oct 11 21:50:57 old-k8s-version-310298 kubelet[666]: E1011 21:50:57.350003     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.749960 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:03 old-k8s-version-310298 kubelet[666]: E1011 21:51:03.362364     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1011 21:53:47.750360 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:10 old-k8s-version-310298 kubelet[666]: E1011 21:51:10.349716     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.750584 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:14 old-k8s-version-310298 kubelet[666]: E1011 21:51:14.349911     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.750937 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:22 old-k8s-version-310298 kubelet[666]: E1011 21:51:22.349634     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.751150 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:25 old-k8s-version-310298 kubelet[666]: E1011 21:51:25.350570     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.751764 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:38 old-k8s-version-310298 kubelet[666]: E1011 21:51:38.414410     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.751978 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:40 old-k8s-version-310298 kubelet[666]: E1011 21:51:40.350770     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.752334 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:48 old-k8s-version-310298 kubelet[666]: E1011 21:51:48.052903     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.752546 1085624 logs.go:138] Found kubelet problem: Oct 11 21:51:52 old-k8s-version-310298 kubelet[666]: E1011 21:51:52.349920     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.752902 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:00 old-k8s-version-310298 kubelet[666]: E1011 21:52:00.354871     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.753114 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:06 old-k8s-version-310298 kubelet[666]: E1011 21:52:06.350036     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.753471 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:12 old-k8s-version-310298 kubelet[666]: E1011 21:52:12.349554     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.753719 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:21 old-k8s-version-310298 kubelet[666]: E1011 21:52:21.350139     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.754079 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:27 old-k8s-version-310298 kubelet[666]: E1011 21:52:27.349598     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.754308 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:32 old-k8s-version-310298 kubelet[666]: E1011 21:52:32.349985     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.754669 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: E1011 21:52:38.349563     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.754878 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:43 old-k8s-version-310298 kubelet[666]: E1011 21:52:43.350681     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.755237 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: E1011 21:52:52.349495     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.755451 1085624 logs.go:138] Found kubelet problem: Oct 11 21:52:54 old-k8s-version-310298 kubelet[666]: E1011 21:52:54.349999     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.755814 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: E1011 21:53:04.349825     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.756026 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.756386 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.756697 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.756908 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.757269 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:47.757484 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:42 old-k8s-version-310298 kubelet[666]: E1011 21:53:42.350138     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:47.757846 1085624 logs.go:138] Found kubelet problem: Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: E1011 21:53:45.362878     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
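
The kubelet warnings above reduce to two recurring failures: metrics-server cycling between ErrImagePull and ImagePullBackOff because the registry fake.domain never resolves, and dashboard-metrics-scraper in CrashLoopBackOff with the back-off growing from 10s through 20s, 40s, and 1m20s to 2m40s. A sketch for inspecting both pods directly (pod names and namespaces taken from the log; the kubeconfig context is assumed to match the profile name):

    # Inspect the two pods that dominate the kubelet warnings above.
    kubectl --context old-k8s-version-310298 -n kube-system \
        describe pod metrics-server-9975d5f86-mv42d
    kubectl --context old-k8s-version-310298 -n kubernetes-dashboard \
        describe pod dashboard-metrics-scraper-8d5bb5db8-ngf5b
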
	I1011 21:53:47.757876 1085624 logs.go:123] Gathering logs for etcd [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1] ...
	I1011 21:53:47.757905 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1"
	I1011 21:53:47.841811 1085624 logs.go:123] Gathering logs for coredns [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb] ...
	I1011 21:53:47.841892 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb"
	I1011 21:53:47.899566 1085624 logs.go:123] Gathering logs for kube-proxy [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31] ...
	I1011 21:53:47.899597 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31"
	I1011 21:53:47.956134 1085624 logs.go:123] Gathering logs for kubernetes-dashboard [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3] ...
	I1011 21:53:47.956171 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3"
	I1011 21:53:48.035155 1085624 logs.go:123] Gathering logs for describe nodes ...
	I1011 21:53:48.035193 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1011 21:53:46.465016 1096229 out.go:235]   - Generating certificates and keys ...
	I1011 21:53:46.465219 1096229 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1011 21:53:46.465341 1096229 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1011 21:53:46.598055 1096229 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1011 21:53:47.998590 1096229 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1011 21:53:48.212314 1096229 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1011 21:53:48.526634 1096229 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1011 21:53:48.244159 1085624 logs.go:123] Gathering logs for kube-apiserver [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094] ...
	I1011 21:53:48.244190 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094"
	I1011 21:53:48.344263 1085624 logs.go:123] Gathering logs for kube-scheduler [5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7] ...
	I1011 21:53:48.344345 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7"
	I1011 21:53:48.417246 1085624 logs.go:123] Gathering logs for kube-proxy [032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3] ...
	I1011 21:53:48.417279 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3"
	I1011 21:53:48.484287 1085624 logs.go:123] Gathering logs for kube-controller-manager [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad] ...
	I1011 21:53:48.484319 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad"
	I1011 21:53:48.575776 1085624 logs.go:123] Gathering logs for kindnet [8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e] ...
	I1011 21:53:48.575813 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e"
	I1011 21:53:48.626351 1085624 logs.go:123] Gathering logs for storage-provisioner [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480] ...
	I1011 21:53:48.626380 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480"
	I1011 21:53:48.677516 1085624 logs.go:123] Gathering logs for storage-provisioner [eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8] ...
	I1011 21:53:48.677544 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8"
	I1011 21:53:48.734382 1085624 logs.go:123] Gathering logs for kube-apiserver [5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b] ...
	I1011 21:53:48.734417 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b"
	I1011 21:53:48.839627 1085624 logs.go:123] Gathering logs for coredns [eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848] ...
	I1011 21:53:48.839663 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848"
	I1011 21:53:48.904674 1085624 logs.go:123] Gathering logs for containerd ...
	I1011 21:53:48.904702 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1011 21:53:48.975439 1085624 logs.go:123] Gathering logs for kindnet [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e] ...
	I1011 21:53:48.975478 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e"
	I1011 21:53:49.044532 1085624 logs.go:123] Gathering logs for container status ...
	I1011 21:53:49.044609 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1011 21:53:49.167140 1085624 logs.go:123] Gathering logs for dmesg ...
	I1011 21:53:49.167219 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1011 21:53:49.225272 1085624 logs.go:123] Gathering logs for kube-scheduler [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466] ...
	I1011 21:53:49.225302 1085624 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466"
	I1011 21:53:49.332611 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:49.332684 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1011 21:53:49.332768 1085624 out.go:270] X Problems detected in kubelet:
	W1011 21:53:49.332812 1085624 out.go:270]   Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332866 1085624 out.go:270]   Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332902 1085624 out.go:270]   Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	W1011 21:53:49.332954 1085624 out.go:270]   Oct 11 21:53:42 old-k8s-version-310298 kubelet[666]: E1011 21:53:42.350138     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1011 21:53:49.332986 1085624 out.go:270]   Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: E1011 21:53:45.362878     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	I1011 21:53:49.333037 1085624 out.go:358] Setting ErrFile to fd 2...
	I1011 21:53:49.333059 1085624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:53:49.194614 1096229 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1011 21:53:49.194897 1096229 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-159135 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1011 21:53:49.549802 1096229 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1011 21:53:49.550087 1096229 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-159135 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1011 21:53:50.218147 1096229 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1011 21:53:51.849911 1096229 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1011 21:53:52.222365 1096229 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1011 21:53:52.222639 1096229 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1011 21:53:52.689361 1096229 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1011 21:53:52.955823 1096229 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1011 21:53:53.255489 1096229 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1011 21:53:53.648069 1096229 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1011 21:53:54.326395 1096229 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1011 21:53:54.327052 1096229 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1011 21:53:54.330553 1096229 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1011 21:53:54.332786 1096229 out.go:235]   - Booting up control plane ...
	I1011 21:53:54.332885 1096229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1011 21:53:54.332967 1096229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1011 21:53:54.334289 1096229 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1011 21:53:54.347684 1096229 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1011 21:53:54.355048 1096229 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1011 21:53:54.355111 1096229 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1011 21:53:54.454935 1096229 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1011 21:53:54.455071 1096229 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1011 21:53:55.456369 1096229 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001758413s
	I1011 21:53:55.456460 1096229 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
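
The 1096229 lines interleaved above belong to a second profile (embed-certs-159135) running `kubeadm init`: certificates, then kubeconfig files, then static-pod manifests, then the kubelet and API-server health checks. For reference, the same stages can be driven one at a time; a sketch only, omitting the cluster-specific flags kubeadm would normally receive:

    # Run the bootstrap phases seen above individually.
    kubeadm init phase certs all
    kubeadm init phase kubeconfig all
    kubeadm init phase control-plane all
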
	I1011 21:53:59.333701 1085624 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1011 21:53:59.469048 1085624 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1011 21:53:59.471215 1085624 out.go:201] 
	W1011 21:53:59.473010 1085624 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1011 21:53:59.473050 1085624 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1011 21:53:59.473069 1085624 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1011 21:53:59.473075 1085624 out.go:270] * 
	W1011 21:53:59.474037 1085624 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1011 21:53:59.475339 1085624 out.go:201] 
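
The run ends with K8S_UNHEALTHY_CONTROL_PLANE: the apiserver answers healthz with 200 but the control plane never updates to v1.20.0. The follow-up steps are the ones the output itself suggests (commands verbatim from the log):

    # Clean up the stale profile state, then collect logs for the issue report.
    minikube delete --all --purge
    minikube logs --file=logs.txt
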
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9cb89d1be38c8       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   6b27f393639d2       dashboard-metrics-scraper-8d5bb5db8-ngf5b
	2cecd3a9f831b       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   53009cefe3342       storage-provisioner
	35b0c02d780d7       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   eae8cb74573cf       kubernetes-dashboard-cd95d586-95mv9
	cd71c6dbeef5b       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   413d91c15cfe0       coredns-74ff55c5b-hvgz4
	8eb885e70cccb       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   f0f80ee3a20ff       busybox
	eedd8607f9f6a       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   53009cefe3342       storage-provisioner
	bc1c5cd415f21       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   d26e8cf6c1a67       kube-proxy-h6nvx
	eb8d210528a03       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                 1                   3aa8343f5654f       kindnet-plhv2
	be825cbc77b19       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   dccac50c32edb       kube-controller-manager-old-k8s-version-310298
	9b44e06318cd0       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   bcd037c7e142c       kube-apiserver-old-k8s-version-310298
	407bb72f6ddb3       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   11bba4daa95b1       kube-scheduler-old-k8s-version-310298
	9d1741ef83bdc       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   95bee05b4b2ef       etcd-old-k8s-version-310298
	a55d0945a569c       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ad5df367a03fc       busybox
	eace983c739ec       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   067d192d23ecb       coredns-74ff55c5b-hvgz4
	8d9728707d00c       0bcd66b03df5f       8 minutes ago       Exited              kindnet-cni                 0                   9027d2553dba9       kindnet-plhv2
	032ba2406be5b       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   5d704450016dd       kube-proxy-h6nvx
	5fd8c74650492       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   8a0c8a42349b5       kube-scheduler-old-k8s-version-310298
	ee8975aa67439       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   1f784ef12d3c3       kube-controller-manager-old-k8s-version-310298
	5af6525b2dd85       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a6c3d37614eaf       kube-apiserver-old-k8s-version-310298
	82b03819cdb0c       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   95c69ed4ca434       etcd-old-k8s-version-310298
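
The table above is the output of the "container status" gathering step logged earlier; the equivalent manual invocation, with the same docker fallback (command taken from the log, backticks rewritten as $(...)):

    # Same container-status listing minikube ran, crictl first, docker as fallback.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
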
	
	
	==> containerd <==
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.401036536Z" level=info msg="StartContainer for \"25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6\""
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.474034246Z" level=info msg="StartContainer for \"25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6\" returns successfully"
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.503656823Z" level=info msg="shim disconnected" id=25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6 namespace=k8s.io
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.503863485Z" level=warning msg="cleaning up after shim disconnected" id=25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6 namespace=k8s.io
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.503887427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 11 21:50:15 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:15.515426292Z" level=warning msg="cleanup warnings time=\"2024-10-11T21:50:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Oct 11 21:50:16 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:16.176010196Z" level=info msg="RemoveContainer for \"e87db1253abf509050bf32f62a21eb84660492cc9a32ce84da41e26d8f90e75a\""
	Oct 11 21:50:16 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:50:16.182413628Z" level=info msg="RemoveContainer for \"e87db1253abf509050bf32f62a21eb84660492cc9a32ce84da41e26d8f90e75a\" returns successfully"
	Oct 11 21:51:03 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:03.350897893Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:51:03 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:03.356385243Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 11 21:51:03 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:03.357738738Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 11 21:51:03 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:03.357787369Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.354480848Z" level=info msg="CreateContainer within sandbox \"6b27f393639d2522cc0a6723b2e6386bd84b8240deeca7db0f833f35fa2b9f24\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.371276837Z" level=info msg="CreateContainer within sandbox \"6b27f393639d2522cc0a6723b2e6386bd84b8240deeca7db0f833f35fa2b9f24\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8\""
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.371914651Z" level=info msg="StartContainer for \"9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8\""
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.441693341Z" level=info msg="StartContainer for \"9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8\" returns successfully"
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.465234405Z" level=info msg="shim disconnected" id=9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8 namespace=k8s.io
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.465410585Z" level=warning msg="cleaning up after shim disconnected" id=9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8 namespace=k8s.io
	Oct 11 21:51:37 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:37.465433502Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 11 21:51:38 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:38.418150192Z" level=info msg="RemoveContainer for \"25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6\""
	Oct 11 21:51:38 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:51:38.436794549Z" level=info msg="RemoveContainer for \"25e92c957978dac7765eeb0218b490b58dce60c10ed6a0e2a8fa41949dac50a6\" returns successfully"
	Oct 11 21:53:56 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:53:56.351980601Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:56 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:53:56.357036585Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 11 21:53:56 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:53:56.358955017Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 11 21:53:56 old-k8s-version-310298 containerd[574]: time="2024-10-11T21:53:56.359062971Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
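	Note: the PullImage failures above never reach a registry; the name fake.domain simply does not resolve via the node's DNS (192.168.76.1). A sketch of how to confirm this from inside the node (crictl is assumed to be present, as it is on minikube nodes):

	  # DNS lookup fails before any HTTP request is made
	  nslookup fake.domain
	  # reproduces the same error path containerd logs above
	  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4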
	
	
	==> coredns [cd71c6dbeef5b0a3d75f054eb6ed2e0d464eee0fd996adadf060f2b2aadb19eb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:60574 - 15613 "HINFO IN 8939377658686111799.416293902896406007. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022929998s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1011 21:48:38.871012       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-11 21:48:08.870429175 +0000 UTC m=+0.026578670) (total time: 30.00047861s):
	Trace[2019727887]: [30.00047861s] [30.00047861s] END
	E1011 21:48:38.871046       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1011 21:48:38.871134       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-11 21:48:08.870848528 +0000 UTC m=+0.026998041) (total time: 30.000274992s):
	Trace[939984059]: [30.000274992s] [30.000274992s] END
	E1011 21:48:38.871147       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1011 21:48:38.871436       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-11 21:48:08.870670897 +0000 UTC m=+0.026820392) (total time: 30.00072135s):
	Trace[911902081]: [30.00072135s] [30.00072135s] END
	E1011 21:48:38.871448       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
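	Note: 10.96.0.1:443 is the clusterIP of the default/kubernetes Service, which forwards to the API server; the 30s i/o timeouts (and the repeated plugin/ready "Still waiting on: kubernetes" lines) are what CoreDNS logs while the control plane is still coming back up after the restart. The VIP can be checked with:

	  kubectl get svc kubernetes    # CLUSTER-IP should show 10.96.0.1, port 443/TCP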
	
	
	==> coredns [eace983c739ecacace71dab861321d9984cc1eaf2f004723d41e3c3df9b9e848] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:52118 - 42617 "HINFO IN 5427595718352740463.291438061973666831. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.042075939s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-310298
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-310298
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22fb0b41d4a12c6d1b6775ff06e33685efed0efd
	                    minikube.k8s.io/name=old-k8s-version-310298
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_11T21_45_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 11 Oct 2024 21:45:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-310298
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 11 Oct 2024 21:53:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 11 Oct 2024 21:53:59 +0000   Fri, 11 Oct 2024 21:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 11 Oct 2024 21:53:59 +0000   Fri, 11 Oct 2024 21:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 11 Oct 2024 21:53:59 +0000   Fri, 11 Oct 2024 21:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 11 Oct 2024 21:53:59 +0000   Fri, 11 Oct 2024 21:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-310298
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 10ce936736d34bc99b4d6626605ad436
	  System UUID:                cf702acc-8965-4d9c-a4f1-c08eec0769dc
	  Boot ID:                    d161fc74-b16f-4a64-ba04-769b77a65402
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 coredns-74ff55c5b-hvgz4                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m20s
	  kube-system                 etcd-old-k8s-version-310298                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m29s
	  kube-system                 kindnet-plhv2                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m20s
	  kube-system                 kube-apiserver-old-k8s-version-310298             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-old-k8s-version-310298    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-h6nvx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-old-k8s-version-310298             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 metrics-server-9975d5f86-mv42d                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-ngf5b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-95mv9               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m48s (x5 over 8m48s)  kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m48s (x4 over 8m48s)  kubelet     Node old-k8s-version-310298 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m48s (x4 over 8m48s)  kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet     Node old-k8s-version-310298 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s                  kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m20s                  kubelet     Node old-k8s-version-310298 status is now: NodeReady
	  Normal  Starting                 8m19s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m6s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-310298 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet     Node old-k8s-version-310298 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m53s                  kube-proxy  Starting kube-proxy.
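	Note: the 47% CPU figure under Allocated resources is just the sum of the request column above over the node's 2 CPUs:

	  100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver)
	  + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 950m
	  950m / 2000m = 47.5%, displayed truncated as 47%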
	
	
	==> dmesg <==
	
	
	==> etcd [82b03819cdb0cf5e629a3c850a370c0c9dcc4a01043c31a7138c9543ca887dab] <==
	2024-10-11 21:45:14.541734 I | embed: listening for peers on 192.168.76.2:2380
	raft2024/10/11 21:45:14 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/10/11 21:45:14 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/11 21:45:14 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/11 21:45:14 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/11 21:45:14 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-11 21:45:14.930990 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-11 21:45:14.937161 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-11 21:45:14.937303 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-11 21:45:14.937383 I | etcdserver: published {Name:old-k8s-version-310298 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-11 21:45:14.937555 I | embed: ready to serve client requests
	2024-10-11 21:45:14.939037 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-11 21:45:14.962294 I | embed: ready to serve client requests
	2024-10-11 21:45:14.974890 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-11 21:45:38.727722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:45:45.909216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:45:55.909202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:05.909217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:15.909148 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:25.909199 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:35.909299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:45.909298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:46:55.909226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:47:05.909342 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:47:15.915896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [9d1741ef83bdc1597791e23aa2dbfaa3da4b63b142942abfedb422423f2ad3e1] <==
	2024-10-11 21:49:58.319375 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:08.318024 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:18.318714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:28.316969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:38.318836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:48.319613 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:50:58.318701 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:08.318577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:18.322759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:28.320424 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:38.318396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:48.318776 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:51:58.318161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:08.318554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:18.322108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:28.320091 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:38.321149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:48.318986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:52:58.318068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:08.317110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:18.316928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:28.316801 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:38.318897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:48.321823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-11 21:53:58.329740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
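	Note: the /health entries arriving every ~10s are health-probe traffic against etcd, not client load. The same check can be run by hand with etcdctl (a sketch; the cert paths assume minikube's kubeadm-style defaults and may differ):

	  sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health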
	
	
	==> kernel <==
	 21:54:01 up  5:36,  0 users,  load average: 1.67, 2.02, 2.43
	Linux old-k8s-version-310298 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8d9728707d00c6da7d7f8b96517388a46853ecdbf82443fd04c10c9cb040f68e] <==
	I1011 21:45:45.213927       1 controller.go:342] Waiting for informer caches to sync
	I1011 21:45:45.214029       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1011 21:45:45.602369       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1011 21:45:45.602399       1 metrics.go:61] Registering metrics
	I1011 21:45:45.602467       1 controller.go:378] Syncing nftables rules
	I1011 21:45:55.214394       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:45:55.214432       1 main.go:300] handling current node
	I1011 21:46:05.214337       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:05.214371       1 main.go:300] handling current node
	I1011 21:46:15.216504       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:15.216539       1 main.go:300] handling current node
	I1011 21:46:25.221006       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:25.221040       1 main.go:300] handling current node
	I1011 21:46:35.217946       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:35.217985       1 main.go:300] handling current node
	I1011 21:46:45.213825       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:45.213991       1 main.go:300] handling current node
	I1011 21:46:55.219002       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:46:55.219035       1 main.go:300] handling current node
	I1011 21:47:05.221740       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:47:05.221965       1 main.go:300] handling current node
	I1011 21:47:15.213929       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:47:15.214043       1 main.go:300] handling current node
	I1011 21:47:25.218876       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:47:25.218984       1 main.go:300] handling current node
	
	
	==> kindnet [eb8d210528a0383f35072a5ddcd9a7dcb9a5f98a9ecc94a98afec5fa18f3794e] <==
	I1011 21:51:58.811373       1 main.go:300] handling current node
	I1011 21:52:08.803201       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:08.803236       1 main.go:300] handling current node
	I1011 21:52:18.802735       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:18.802768       1 main.go:300] handling current node
	I1011 21:52:28.811204       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:28.811236       1 main.go:300] handling current node
	I1011 21:52:38.811650       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:38.811688       1 main.go:300] handling current node
	I1011 21:52:48.806338       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:48.806376       1 main.go:300] handling current node
	I1011 21:52:58.810557       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:52:58.810593       1 main.go:300] handling current node
	I1011 21:53:08.803653       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:08.803923       1 main.go:300] handling current node
	I1011 21:53:18.809089       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:18.809123       1 main.go:300] handling current node
	I1011 21:53:28.810400       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:28.810493       1 main.go:300] handling current node
	I1011 21:53:38.812374       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:38.812477       1 main.go:300] handling current node
	I1011 21:53:48.806348       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:48.806419       1 main.go:300] handling current node
	I1011 21:53:58.811577       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1011 21:53:58.811671       1 main.go:300] handling current node
	
	
	==> kube-apiserver [5af6525b2dd85430030e7cc30444924fa6fea466c5171a867eb920e17f7ea06b] <==
	I1011 21:45:22.027956       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1011 21:45:22.027985       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1011 21:45:22.041687       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1011 21:45:22.046114       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1011 21:45:22.046139       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1011 21:45:22.521290       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1011 21:45:22.567662       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1011 21:45:22.642427       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1011 21:45:22.643829       1 controller.go:606] quota admission added evaluator for: endpoints
	I1011 21:45:22.647696       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1011 21:45:23.685805       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1011 21:45:24.116012       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1011 21:45:24.187200       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1011 21:45:32.520740       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1011 21:45:41.410953       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1011 21:45:41.600137       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1011 21:45:48.429230       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:45:48.429275       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:45:48.429284       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1011 21:46:26.726989       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:46:26.727037       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:46:26.727072       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1011 21:47:07.601456       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:47:07.601503       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:47:07.601655       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [9b44e06318cd0e3c3376f4e614ce6eba6b9b0a10bc88659589618c2379685094] <==
	I1011 21:50:44.298243       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:50:44.298262       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1011 21:51:09.243770       1 handler_proxy.go:102] no RequestInfo found in the context
	E1011 21:51:09.244005       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1011 21:51:09.244023       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 21:51:25.638852       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:51:25.638898       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:51:25.638910       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1011 21:51:59.287489       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:51:59.287540       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:51:59.287549       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1011 21:52:43.065767       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:52:43.065814       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:52:43.065850       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1011 21:53:07.471319       1 handler_proxy.go:102] no RequestInfo found in the context
	E1011 21:53:07.471401       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1011 21:53:07.471409       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1011 21:53:22.667183       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:53:22.667228       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:53:22.667236       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1011 21:53:56.809203       1 client.go:360] parsed scheme: "passthrough"
	I1011 21:53:56.809261       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1011 21:53:56.809270       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
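	Note: the recurring 503 for v1beta1.metrics.k8s.io is the aggregation layer failing to proxy to metrics-server, whose pod never starts because its image pull fails against fake.domain (see the containerd log above). The APIService status makes this visible directly:

	  kubectl get apiservice v1beta1.metrics.k8s.io
	  # AVAILABLE is expected to read False (e.g. FailedDiscoveryCheck or MissingEndpoints)
	  kubectl -n kube-system get endpoints metrics-server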
	
	
	==> kube-controller-manager [be825cbc77b19f2b69c1897a56d05f7e1432e0c3f6a170450b7e9260547b9bad] <==
	E1011 21:49:56.238533       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:50:03.916518       1 request.go:655] Throttling request took 1.048457758s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1011 21:50:04.767832       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:50:26.740396       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:50:36.418250       1 request.go:655] Throttling request took 1.048452978s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1011 21:50:37.269788       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:50:57.242085       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:51:08.920274       1 request.go:655] Throttling request took 1.04841554s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W1011 21:51:09.771636       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:51:27.743845       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:51:41.422106       1 request.go:655] Throttling request took 1.048286428s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1011 21:51:42.273947       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:51:58.245751       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:52:13.924362       1 request.go:655] Throttling request took 1.048415159s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1011 21:52:14.776252       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:52:28.747452       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:52:46.426664       1 request.go:655] Throttling request took 1.047990548s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1011 21:52:47.278452       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:52:59.249277       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:53:18.928832       1 request.go:655] Throttling request took 1.047916753s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W1011 21:53:19.780405       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:53:29.751488       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1011 21:53:51.430959       1 request.go:655] Throttling request took 1.04808107s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1011 21:53:52.282733       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1011 21:54:00.260928       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
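	Note: the resource-quota and garbage-collector errors here are downstream of the same unavailable metrics.k8s.io/v1beta1 APIService: both controllers run full API discovery, and one dead aggregated API taints every sync. The "Throttling request took ~1.05s" lines are client-go's client-side rate limiter slowing that discovery burst. The same symptom shows up client-side as:

	  kubectl api-resources
	  # error: unable to retrieve the complete list of server APIs:
	  # metrics.k8s.io/v1beta1: the server is currently unable to handle the request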
	
	
	==> kube-controller-manager [ee8975aa67439f54b50ddec734804fad8105c189480e665798caf6f42c15e0bb] <==
	E1011 21:45:41.443389       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1011 21:45:41.458036       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-plhv2"
	I1011 21:45:41.472228       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6nvx"
	I1011 21:45:41.532394       1 shared_informer.go:247] Caches are synced for disruption 
	I1011 21:45:41.532414       1 disruption.go:339] Sending events to api server.
	E1011 21:45:41.549037       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c800cc1f-571f-45b8-9b71-1c6e22303004", ResourceVersion:"261", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864279924, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f7c1c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f7c1e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f7c200), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000511400), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f7c220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f7c240), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f7c280)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001513f20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000468088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000bf7030), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002f2738)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40004680d8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1011 21:45:41.574371       1 shared_informer.go:247] Caches are synced for resource quota 
	I1011 21:45:41.582638       1 shared_informer.go:247] Caches are synced for resource quota 
	I1011 21:45:41.596598       1 shared_informer.go:247] Caches are synced for deployment 
	I1011 21:45:41.606967       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1011 21:45:41.640127       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lfcjx"
	I1011 21:45:41.669643       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hvgz4"
	I1011 21:45:41.735997       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1011 21:45:42.033857       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1011 21:45:42.033886       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1011 21:45:42.036209       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1011 21:45:43.190579       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1011 21:45:43.207364       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lfcjx"
	I1011 21:45:46.415622       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1011 21:45:46.416823       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-hvgz4" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-hvgz4"
	I1011 21:45:46.417026       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1011 21:45:46.417276       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-lfcjx" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-lfcjx"
	I1011 21:47:24.926505       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1011 21:47:25.104481       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E1011 21:47:25.139248       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [032ba2406be5b53ad4b1503e20ce668209b09e6f132934ab7bf58d738173ceb3] <==
	I1011 21:45:42.579755       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1011 21:45:42.579861       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1011 21:45:42.644833       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1011 21:45:42.648553       1 server_others.go:185] Using iptables Proxier.
	I1011 21:45:42.652275       1 server.go:650] Version: v1.20.0
	I1011 21:45:42.657001       1 config.go:315] Starting service config controller
	I1011 21:45:42.657024       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1011 21:45:42.657055       1 config.go:224] Starting endpoint slice config controller
	I1011 21:45:42.657059       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1011 21:45:42.757912       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1011 21:45:42.757966       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [bc1c5cd415f21ed7f979885aec87b92f194280615956f56d5270109b28587e31] <==
	I1011 21:48:08.717172       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1011 21:48:08.717237       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1011 21:48:08.744126       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1011 21:48:08.744220       1 server_others.go:185] Using iptables Proxier.
	I1011 21:48:08.744441       1 server.go:650] Version: v1.20.0
	I1011 21:48:08.744948       1 config.go:315] Starting service config controller
	I1011 21:48:08.744963       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1011 21:48:08.748758       1 config.go:224] Starting endpoint slice config controller
	I1011 21:48:08.748776       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1011 21:48:08.850378       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1011 21:48:08.850442       1 shared_informer.go:247] Caches are synced for service config 
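	Note: Unknown proxy mode "" just means the mode field in the kube-proxy configuration was left empty, so the proxier falls back to iptables on Linux (hence "Using iptables Proxier" on the next line). The effective setting lives in the kube-proxy ConfigMap:

	  kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
	  # mode: ""   -> iptables is assumed, matching the log line above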
	
	
	==> kube-scheduler [407bb72f6ddb3ef6abcccd96eee9928ae437f85741080668ed52eb6c588c6466] <==
	I1011 21:48:02.113475       1 serving.go:331] Generated self-signed cert in-memory
	W1011 21:48:06.402782       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 21:48:06.402813       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 21:48:06.402821       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 21:48:06.402829       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 21:48:06.614701       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:48:06.614941       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1011 21:48:06.615036       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1011 21:48:06.617771       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:48:06.718393       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [5fd8c746504928d62365956ab2c3f86bdd2a81622c1e2e9db463dd6c88e802d7] <==
	W1011 21:45:21.205854       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1011 21:45:21.205985       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1011 21:45:21.206027       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1011 21:45:21.206055       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1011 21:45:21.259665       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1011 21:45:21.260292       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:45:21.261048       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1011 21:45:21.261109       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1011 21:45:21.309897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1011 21:45:21.310184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1011 21:45:21.310416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:45:21.310623       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1011 21:45:21.310828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 21:45:21.311010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1011 21:45:21.311197       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1011 21:45:21.311386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1011 21:45:21.311547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1011 21:45:21.311718       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 21:45:21.311894       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1011 21:45:21.322506       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1011 21:45:22.163515       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1011 21:45:22.225733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1011 21:45:22.255952       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1011 21:45:22.278552       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1011 21:45:25.367628       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
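	Note: the requestheader warning is benign here (the cache syncs at 21:45:25 once RBAC is in place), but the fix the log suggests is a rolebinding. An illustrative instantiation, adapted for the scheduler's identity from the error text (the binding name is a placeholder, not from this run):

	  kubectl create rolebinding scheduler-authentication-reader \
	    -n kube-system \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler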
	
	
	==> kubelet <==
	Oct 11 21:52:32 old-k8s-version-310298 kubelet[666]: E1011 21:52:32.349985     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: I1011 21:52:38.349218     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:52:38 old-k8s-version-310298 kubelet[666]: E1011 21:52:38.349563     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:52:43 old-k8s-version-310298 kubelet[666]: E1011 21:52:43.350681     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: I1011 21:52:52.349144     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:52:52 old-k8s-version-310298 kubelet[666]: E1011 21:52:52.349495     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:52:54 old-k8s-version-310298 kubelet[666]: E1011 21:52:54.349999     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: I1011 21:53:04.349303     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:53:04 old-k8s-version-310298 kubelet[666]: E1011 21:53:04.349825     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:53:06 old-k8s-version-310298 kubelet[666]: E1011 21:53:06.349912     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: I1011 21:53:19.349600     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.350792     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:53:19 old-k8s-version-310298 kubelet[666]: E1011 21:53:19.351717     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:31 old-k8s-version-310298 kubelet[666]: E1011 21:53:31.350160     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: I1011 21:53:34.349142     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:53:34 old-k8s-version-310298 kubelet[666]: E1011 21:53:34.349933     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:53:42 old-k8s-version-310298 kubelet[666]: E1011 21:53:42.350138     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: I1011 21:53:45.362502     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:53:45 old-k8s-version-310298 kubelet[666]: E1011 21:53:45.362878     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
	Oct 11 21:53:56 old-k8s-version-310298 kubelet[666]: E1011 21:53:56.359349     666 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 11 21:53:56 old-k8s-version-310298 kubelet[666]: E1011 21:53:56.359756     666 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 11 21:53:56 old-k8s-version-310298 kubelet[666]: E1011 21:53:56.360009     666 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-st267,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 11 21:53:56 old-k8s-version-310298 kubelet[666]: E1011 21:53:56.360178     666 pod_workers.go:191] Error syncing pod 71139501-3787-4496-8890-bd680ccc626f ("metrics-server-9975d5f86-mv42d_kube-system(71139501-3787-4496-8890-bd680ccc626f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 11 21:53:59 old-k8s-version-310298 kubelet[666]: I1011 21:53:59.349237     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9cb89d1be38c8914419d8aa6b642e30a93d4aa21e9145c8bab31ededeb1f39e8
	Oct 11 21:53:59 old-k8s-version-310298 kubelet[666]: E1011 21:53:59.349558     666 pod_workers.go:191] Error syncing pod 3d0678ba-8e76-46d9-8aeb-698f533f48b0 ("dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ngf5b_kubernetes-dashboard(3d0678ba-8e76-46d9-8aeb-698f533f48b0)"
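
Note: metrics-server in this profile points at the unresolvable registry fake.domain, so every pull dies at DNS ("lookup fake.domain ... no such host") and the pod alternates between ErrImagePull and ImagePullBackOff. A hypothetical way to reproduce the failure from the node (commands assumed available in the minikube image, not part of the captured output):

	# DNS resolution fails exactly as in the kubelet error above
	nslookup fake.domain 192.168.76.1
	# pulling through the CRI fails the same way
	sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4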
	
	
	==> kubernetes-dashboard [35b0c02d780d7351355845738e089f5981d8e7ff5a13f9e9ec54141909f163d3] <==
	2024/10/11 21:48:28 Starting overwatch
	2024/10/11 21:48:28 Using namespace: kubernetes-dashboard
	2024/10/11 21:48:28 Using in-cluster config to connect to apiserver
	2024/10/11 21:48:28 Using secret token for csrf signing
	2024/10/11 21:48:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/11 21:48:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/11 21:48:28 Successful initial request to the apiserver, version: v1.20.0
	2024/10/11 21:48:28 Generating JWE encryption key
	2024/10/11 21:48:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/11 21:48:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/11 21:48:29 Initializing JWE encryption key from synchronized object
	2024/10/11 21:48:29 Creating in-cluster Sidecar client
	2024/10/11 21:48:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:48:29 Serving insecurely on HTTP port: 9090
	2024/10/11 21:48:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:49:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:49:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:50:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:50:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:51:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:51:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:52:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:52:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:53:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/11 21:53:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
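
Note: the dashboard itself is serving (port 9090); only its metrics health check fails, because the dashboard-metrics-scraper Service has no ready endpoints while that pod sits in CrashLoopBackOff (see the kubelet log above). A hypothetical confirmation, not part of the captured output:

	kubectl -n kubernetes-dashboard get endpoints dashboard-metrics-scraper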
	
	
	==> storage-provisioner [2cecd3a9f831b615630c5193af2041617ad4eebd8ffa12e85429956ff3f06480] <==
	I1011 21:48:51.520799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1011 21:48:51.537901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1011 21:48:51.537947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1011 21:49:09.039724       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1011 21:49:09.042204       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-310298_eb05a977-57b7-411b-8a76-70aa612b0470!
	I1011 21:49:09.060563       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c5a5c54d-9e75-4d8f-92c9-5771cf26587a", APIVersion:"v1", ResourceVersion:"837", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-310298_eb05a977-57b7-411b-8a76-70aa612b0470 became leader
	I1011 21:49:09.143278       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-310298_eb05a977-57b7-411b-8a76-70aa612b0470!
	
	
	==> storage-provisioner [eedd8607f9f6af9bbe72480ba1ad5f4a01334692f9e8bc371db85619c6ac73f8] <==
	I1011 21:48:08.404197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1011 21:48:38.406922       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
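
Note: this storage-provisioner instance started at 21:48:08, could not reach the apiserver service VIP 10.96.0.1:443 within its 30s window, and exited; the instance in the preceding section (started 21:48:51) reached the apiserver and took the leader lease. A hypothetical in-cluster reachability check, not part of the captured output:

	kubectl run reach-test --rm -it --restart=Never --image=curlimages/curl -- curl -sk https://10.96.0.1/version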
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-310298 -n old-k8s-version-310298
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-310298 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-mv42d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-310298 describe pod metrics-server-9975d5f86-mv42d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-310298 describe pod metrics-server-9975d5f86-mv42d: exit status 1 (126.377236ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-mv42d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-310298 describe pod metrics-server-9975d5f86-mv42d: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (386.00s)
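
Note: the post-mortem describe above returned NotFound because the metrics-server pod named in the earlier listing no longer existed by the time the command ran (replica pod names churn as the deployment retries). Selecting by label instead of name avoids that race; a hypothetical equivalent, assuming the addon's usual k8s-app=metrics-server label:

	kubectl --context old-k8s-version-310298 -n kube-system describe pod -l k8s-app=metrics-server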

                                                
                                    

Test pass (299/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 6.62
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 151.96
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/PullSecret 9.91
34 TestAddons/parallel/Registry 14.85
35 TestAddons/parallel/Ingress 18.93
36 TestAddons/parallel/InspektorGadget 11.72
37 TestAddons/parallel/MetricsServer 6.92
39 TestAddons/parallel/CSI 59.5
40 TestAddons/parallel/Headlamp 15.82
41 TestAddons/parallel/CloudSpanner 5.7
42 TestAddons/parallel/LocalPath 51.86
43 TestAddons/parallel/NvidiaDevicePlugin 6.96
44 TestAddons/parallel/Yakd 11.78
46 TestAddons/StoppedEnableDisable 12.24
47 TestCertOptions 32.76
48 TestCertExpiration 229.79
50 TestForceSystemdFlag 40.58
51 TestForceSystemdEnv 48.51
52 TestDockerEnvContainerd 47.43
57 TestErrorSpam/setup 32.39
58 TestErrorSpam/start 0.74
59 TestErrorSpam/status 1.1
60 TestErrorSpam/pause 1.76
61 TestErrorSpam/unpause 1.87
62 TestErrorSpam/stop 1.47
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 50.72
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 6.49
69 TestFunctional/serial/KubeContext 0.07
70 TestFunctional/serial/KubectlGetPods 0.11
73 TestFunctional/serial/CacheCmd/cache/add_remote 4.18
74 TestFunctional/serial/CacheCmd/cache/add_local 1.25
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
76 TestFunctional/serial/CacheCmd/cache/list 0.06
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
78 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
79 TestFunctional/serial/CacheCmd/cache/delete 0.12
80 TestFunctional/serial/MinikubeKubectlCmd 0.14
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
82 TestFunctional/serial/ExtraConfig 62.25
83 TestFunctional/serial/ComponentHealth 0.09
84 TestFunctional/serial/LogsCmd 1.7
85 TestFunctional/serial/LogsFileCmd 1.69
86 TestFunctional/serial/InvalidService 4.35
88 TestFunctional/parallel/ConfigCmd 0.48
89 TestFunctional/parallel/DashboardCmd 7.77
90 TestFunctional/parallel/DryRun 0.39
91 TestFunctional/parallel/InternationalLanguage 0.24
92 TestFunctional/parallel/StatusCmd 1.22
96 TestFunctional/parallel/ServiceCmdConnect 10.66
97 TestFunctional/parallel/AddonsCmd 0.18
98 TestFunctional/parallel/PersistentVolumeClaim 25.17
100 TestFunctional/parallel/SSHCmd 0.69
101 TestFunctional/parallel/CpCmd 2.19
103 TestFunctional/parallel/FileSync 0.27
104 TestFunctional/parallel/CertSync 2.44
108 TestFunctional/parallel/NodeLabels 0.1
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.96
112 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.73
126 TestFunctional/parallel/ServiceCmd/List 0.7
127 TestFunctional/parallel/ProfileCmd/profile_list 0.5
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
131 TestFunctional/parallel/MountCmd/any-port 7.3
132 TestFunctional/parallel/ServiceCmd/Format 0.49
133 TestFunctional/parallel/ServiceCmd/URL 0.41
134 TestFunctional/parallel/MountCmd/specific-port 2.22
135 TestFunctional/parallel/MountCmd/VerifyCleanup 2.93
136 TestFunctional/parallel/Version/short 0.09
137 TestFunctional/parallel/Version/components 1.34
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
143 TestFunctional/parallel/ImageCommands/Setup 0.75
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.81
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.3
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.56
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 115.65
161 TestMultiControlPlane/serial/DeployApp 33.3
162 TestMultiControlPlane/serial/PingHostFromPods 1.66
163 TestMultiControlPlane/serial/AddWorkerNode 27.52
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.06
166 TestMultiControlPlane/serial/CopyFile 19.1
167 TestMultiControlPlane/serial/StopSecondaryNode 12.96
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.98
169 TestMultiControlPlane/serial/RestartSecondaryNode 18.27
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.63
172 TestMultiControlPlane/serial/DeleteSecondaryNode 10.49
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
174 TestMultiControlPlane/serial/StopCluster 36.11
175 TestMultiControlPlane/serial/RestartCluster 77.01
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
177 TestMultiControlPlane/serial/AddSecondaryNode 42.94
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
182 TestJSONOutput/start/Command 60.37
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.72
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.67
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.73
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.23
207 TestKicCustomNetwork/create_custom_network 38.39
208 TestKicCustomNetwork/use_default_bridge_network 31.55
209 TestKicExistingNetwork 32.8
210 TestKicCustomSubnet 35.31
211 TestKicStaticIP 32.1
212 TestMainNoArgs 0.08
213 TestMinikubeProfile 68.2
216 TestMountStart/serial/StartWithMountFirst 6.12
217 TestMountStart/serial/VerifyMountFirst 0.27
218 TestMountStart/serial/StartWithMountSecond 6.42
219 TestMountStart/serial/VerifyMountSecond 0.26
220 TestMountStart/serial/DeleteFirst 1.61
221 TestMountStart/serial/VerifyMountPostDelete 0.28
222 TestMountStart/serial/Stop 1.21
223 TestMountStart/serial/RestartStopped 7.89
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 69.23
228 TestMultiNode/serial/DeployApp2Nodes 17.58
229 TestMultiNode/serial/PingHostFrom2Pods 1.01
230 TestMultiNode/serial/AddNode 16.43
231 TestMultiNode/serial/MultiNodeLabels 0.1
232 TestMultiNode/serial/ProfileList 0.71
233 TestMultiNode/serial/CopyFile 10.1
234 TestMultiNode/serial/StopNode 2.23
235 TestMultiNode/serial/StartAfterStop 10
236 TestMultiNode/serial/RestartKeepsNodes 98.03
237 TestMultiNode/serial/DeleteNode 5.53
238 TestMultiNode/serial/StopMultiNode 24.13
239 TestMultiNode/serial/RestartMultiNode 52.58
240 TestMultiNode/serial/ValidateNameConflict 33.13
245 TestPreload 108.44
247 TestScheduledStopUnix 110.71
250 TestInsufficientStorage 9.93
251 TestRunningBinaryUpgrade 85.1
253 TestKubernetesUpgrade 349.58
254 TestMissingContainerUpgrade 191.6
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestNoKubernetes/serial/StartWithK8s 40.45
258 TestNoKubernetes/serial/StartWithStopK8s 8.96
259 TestNoKubernetes/serial/Start 6.2
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
261 TestNoKubernetes/serial/ProfileList 1.73
262 TestNoKubernetes/serial/Stop 1.94
263 TestNoKubernetes/serial/StartNoArgs 7.76
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
265 TestStoppedBinaryUpgrade/Setup 0.85
266 TestStoppedBinaryUpgrade/Upgrade 111.81
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
276 TestPause/serial/Start 70.53
280 TestPause/serial/SecondStartNoReconfiguration 6.84
285 TestNetworkPlugins/group/false 5.14
289 TestPause/serial/Pause 0.94
290 TestPause/serial/VerifyStatus 0.41
291 TestPause/serial/Unpause 0.86
292 TestPause/serial/PauseAgain 1.22
293 TestPause/serial/DeletePaused 3.38
294 TestPause/serial/VerifyDeletedResources 0.18
296 TestStartStop/group/old-k8s-version/serial/FirstStart 153.45
298 TestStartStop/group/no-preload/serial/FirstStart 75.54
299 TestStartStop/group/old-k8s-version/serial/DeployApp 12.21
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
301 TestStartStop/group/old-k8s-version/serial/Stop 12.35
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
304 TestStartStop/group/no-preload/serial/DeployApp 10.45
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
306 TestStartStop/group/no-preload/serial/Stop 12.09
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/no-preload/serial/SecondStart 266.58
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
312 TestStartStop/group/no-preload/serial/Pause 3.1
314 TestStartStop/group/embed-certs/serial/FirstStart 63.35
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/old-k8s-version/serial/Pause 2.84
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.32
321 TestStartStop/group/embed-certs/serial/DeployApp 10.41
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
323 TestStartStop/group/embed-certs/serial/Stop 12.28
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/embed-certs/serial/SecondStart 266.55
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.41
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.73
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.15
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
334 TestStartStop/group/embed-certs/serial/Pause 3.18
336 TestStartStop/group/newest-cni/serial/FirstStart 38.27
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.74
341 TestNetworkPlugins/group/auto/Start 60.89
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
344 TestStartStop/group/newest-cni/serial/Stop 3.15
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
346 TestStartStop/group/newest-cni/serial/SecondStart 25.23
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
350 TestStartStop/group/newest-cni/serial/Pause 4.1
351 TestNetworkPlugins/group/kindnet/Start 62.38
352 TestNetworkPlugins/group/auto/KubeletFlags 0.42
353 TestNetworkPlugins/group/auto/NetCatPod 9.43
354 TestNetworkPlugins/group/auto/DNS 0.2
355 TestNetworkPlugins/group/auto/Localhost 0.15
356 TestNetworkPlugins/group/auto/HairPin 0.21
357 TestNetworkPlugins/group/calico/Start 67.49
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
360 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
361 TestNetworkPlugins/group/kindnet/DNS 0.24
362 TestNetworkPlugins/group/kindnet/Localhost 0.2
363 TestNetworkPlugins/group/kindnet/HairPin 0.21
364 TestNetworkPlugins/group/custom-flannel/Start 57.18
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.41
367 TestNetworkPlugins/group/calico/NetCatPod 9.34
368 TestNetworkPlugins/group/calico/DNS 0.31
369 TestNetworkPlugins/group/calico/Localhost 0.26
370 TestNetworkPlugins/group/calico/HairPin 0.24
371 TestNetworkPlugins/group/enable-default-cni/Start 88.45
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
374 TestNetworkPlugins/group/custom-flannel/DNS 0.28
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
377 TestNetworkPlugins/group/flannel/Start 51.58
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.32
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
382 TestNetworkPlugins/group/flannel/NetCatPod 9.34
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
386 TestNetworkPlugins/group/flannel/DNS 0.23
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.23
389 TestNetworkPlugins/group/bridge/Start 71.44
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
391 TestNetworkPlugins/group/bridge/NetCatPod 12.27
392 TestNetworkPlugins/group/bridge/DNS 0.17
393 TestNetworkPlugins/group/bridge/Localhost 0.14
394 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-347006 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-347006 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.671663301s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.67s)
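
Note: with -o=json, minikube writes one JSON event per line to stdout, which is the stream this test parses. A hypothetical way to inspect the same stream by hand (assuming jq is installed and that each event carries a top-level "type" field, as minikube's CloudEvents output does):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-347006 --force --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker | jq -r '.type'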

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1011 20:57:55.935543  875861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1011 20:57:55.935622  875861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
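
Note: preload-exists only asserts that the tarball cached by the earlier download test is still on disk; a hypothetical manual equivalent of the check logged above:

	ls -lh /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4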

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-347006
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-347006: exit status 85 (79.110594ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-347006 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |          |
	|         | -p download-only-347006        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:57:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:57:48.312268  875866 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:57:48.312734  875866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:48.312749  875866 out.go:358] Setting ErrFile to fd 2...
	I1011 20:57:48.312756  875866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:48.313017  875866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	W1011 20:57:48.313172  875866 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19749-870468/.minikube/config/config.json: open /home/jenkins/minikube-integration/19749-870468/.minikube/config/config.json: no such file or directory
	I1011 20:57:48.313592  875866 out.go:352] Setting JSON to true
	I1011 20:57:48.314466  875866 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16816,"bootTime":1728663453,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 20:57:48.314539  875866 start.go:139] virtualization:  
	I1011 20:57:48.317409  875866 out.go:97] [download-only-347006] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1011 20:57:48.317597  875866 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball: no such file or directory
	I1011 20:57:48.317637  875866 notify.go:220] Checking for updates...
	I1011 20:57:48.319218  875866 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:57:48.321336  875866 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:57:48.323041  875866 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 20:57:48.324973  875866 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 20:57:48.327289  875866 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1011 20:57:48.331041  875866 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:57:48.331411  875866 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:57:48.352252  875866 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:57:48.352372  875866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:48.420479  875866 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 20:57:48.411083437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:48.420597  875866 docker.go:318] overlay module found
	I1011 20:57:48.422940  875866 out.go:97] Using the docker driver based on user configuration
	I1011 20:57:48.422968  875866 start.go:297] selected driver: docker
	I1011 20:57:48.422975  875866 start.go:901] validating driver "docker" against <nil>
	I1011 20:57:48.423084  875866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:48.469206  875866 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 20:57:48.46020705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:48.469408  875866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:57:48.469701  875866 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1011 20:57:48.469859  875866 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:57:48.472282  875866 out.go:169] Using Docker driver with root privileges
	I1011 20:57:48.474399  875866 cni.go:84] Creating CNI manager for ""
	I1011 20:57:48.474469  875866 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 20:57:48.474482  875866 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:57:48.474573  875866 start.go:340] cluster config:
	{Name:download-only-347006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-347006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:57:48.476633  875866 out.go:97] Starting "download-only-347006" primary control-plane node in "download-only-347006" cluster
	I1011 20:57:48.476652  875866 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1011 20:57:48.478548  875866 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:57:48.478574  875866 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1011 20:57:48.478736  875866 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:57:48.494148  875866 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:48.494839  875866 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:57:48.494946  875866 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:48.589995  875866 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1011 20:57:48.590025  875866 cache.go:56] Caching tarball of preloaded images
	I1011 20:57:48.590771  875866 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1011 20:57:48.593363  875866 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1011 20:57:48.593385  875866 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1011 20:57:48.679300  875866 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1011 20:57:53.296739  875866 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	
	
	* The control-plane node download-only-347006 host does not exist
	  To start a cluster, run: "minikube start -p download-only-347006"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
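
Note: the preload download in the log above pins an md5 checksum in the URL (?checksum=md5:7e3d48ccb9f143791669d02e14ce1643), which minikube verifies after fetching. A hypothetical manual re-check of the cached file:

	md5sum /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# expect 7e3d48ccb9f143791669d02e14ce1643 (from the download URL)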

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-347006
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-287405 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-287405 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.619361279s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.62s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1011 20:58:02.997033  875861 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1011 20:58:02.997079  875861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-287405
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-287405: exit status 85 (82.235516ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-347006 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-347006        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| delete  | -p download-only-347006        | download-only-347006 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC | 11 Oct 24 20:57 UTC |
	| start   | -o=json --download-only        | download-only-287405 | jenkins | v1.34.0 | 11 Oct 24 20:57 UTC |                     |
	|         | -p download-only-287405        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/11 20:57:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1011 20:57:56.427637  876072 out.go:345] Setting OutFile to fd 1 ...
	I1011 20:57:56.427767  876072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:56.427777  876072 out.go:358] Setting ErrFile to fd 2...
	I1011 20:57:56.427782  876072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 20:57:56.428014  876072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 20:57:56.428393  876072 out.go:352] Setting JSON to true
	I1011 20:57:56.429204  876072 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16824,"bootTime":1728663453,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 20:57:56.429270  876072 start.go:139] virtualization:  
	I1011 20:57:56.432364  876072 out.go:97] [download-only-287405] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 20:57:56.432613  876072 notify.go:220] Checking for updates...
	I1011 20:57:56.434305  876072 out.go:169] MINIKUBE_LOCATION=19749
	I1011 20:57:56.436216  876072 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 20:57:56.437975  876072 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 20:57:56.439860  876072 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 20:57:56.441687  876072 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1011 20:57:56.445024  876072 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1011 20:57:56.445295  876072 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 20:57:56.465060  876072 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 20:57:56.465179  876072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:56.525483  876072 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-11 20:57:56.516364379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:56.525602  876072 docker.go:318] overlay module found
	I1011 20:57:56.527581  876072 out.go:97] Using the docker driver based on user configuration
	I1011 20:57:56.527614  876072 start.go:297] selected driver: docker
	I1011 20:57:56.527623  876072 start.go:901] validating driver "docker" against <nil>
	I1011 20:57:56.527739  876072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 20:57:56.574306  876072 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-11 20:57:56.565236457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 20:57:56.574514  876072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1011 20:57:56.574812  876072 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1011 20:57:56.574976  876072 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1011 20:57:56.577869  876072 out.go:169] Using Docker driver with root privileges
	I1011 20:57:56.579480  876072 cni.go:84] Creating CNI manager for ""
	I1011 20:57:56.579544  876072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1011 20:57:56.579558  876072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1011 20:57:56.579629  876072 start.go:340] cluster config:
	{Name:download-only-287405 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-287405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 20:57:56.581459  876072 out.go:97] Starting "download-only-287405" primary control-plane node in "download-only-287405" cluster
	I1011 20:57:56.581479  876072 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1011 20:57:56.583432  876072 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1011 20:57:56.583462  876072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 20:57:56.583634  876072 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1011 20:57:56.598993  876072 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1011 20:57:56.599119  876072 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1011 20:57:56.599144  876072 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1011 20:57:56.599153  876072 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1011 20:57:56.599161  876072 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1011 20:57:56.643369  876072 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1011 20:57:56.643401  876072 cache.go:56] Caching tarball of preloaded images
	I1011 20:57:56.644131  876072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1011 20:57:56.646221  876072 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1011 20:57:56.646247  876072 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1011 20:57:56.739440  876072 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19749-870468/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-287405 host does not exist
	  To start a cluster, run: "minikube start -p download-only-287405"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-287405
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)
=== RUN   TestBinaryMirror
I1011 20:58:04.249851  875861 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-775051 --alsologtostderr --binary-mirror http://127.0.0.1:38901 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-775051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-775051
--- PASS: TestBinaryMirror (0.57s)
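For context, the test above exercises the --binary-mirror flag, which points kubectl/kubelet/kubeadm downloads at an alternate host instead of dl.k8s.io. A minimal by-hand sketch of the same flow, assuming a mirror of the release tree is already being served locally (the profile name binary-mirror-demo and port 8080 here are illustrative, not from this run; the test stands up its own server on an ephemeral port):

    # route Kubernetes binary downloads through a local mirror instead of dl.k8s.io
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8080 \
      --driver=docker --container-runtime=containerd
    minikube delete -p binary-mirror-demo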

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-652898
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-652898: exit status 85 (78.7371ms)

-- stdout --
	* Profile "addons-652898" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-652898"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-652898
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-652898: exit status 85 (80.79957ms)

-- stdout --
	* Profile "addons-652898" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-652898"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (151.96s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-652898 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-652898 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m31.957238636s)
--- PASS: TestAddons/Setup (151.96s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-652898 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-652898 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/PullSecret (9.91s)
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-652898 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-652898 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0727cbaa-1943-463c-aefd-ad81cf75e790] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0727cbaa-1943-463c-aefd-ad81cf75e790] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 9.00365912s
addons_test.go:633: (dbg) Run:  kubectl --context addons-652898 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-652898 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-652898 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-652898 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (9.91s)

TestAddons/parallel/Registry (14.85s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.203608ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vzmnb" [f153018e-71f1-4acd-b221-2ba610df9d84] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007545933s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mmdzc" [432e8bfd-ef31-4233-8cd7-b002f2475dae] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003907072s
addons_test.go:331: (dbg) Run:  kubectl --context addons-652898 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-652898 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-652898 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.876318669s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 ip
2024/10/11 21:04:41 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.85s)
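The registry check above boils down to two probes, both visible in the log: an in-cluster HTTP probe through a throwaway busybox pod, and a host-side fetch against the node IP on port 5000. A by-hand sketch of the same checks (the /v2/_catalog path is an assumption here, standard Docker registry API rather than anything this test requests):

    # in-cluster: the registry service must answer inside kube-system
    kubectl --context addons-652898 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # host-side: registry-proxy publishes the registry on the node IP, port 5000
    curl -s "http://$(minikube -p addons-652898 ip):5000/v2/_catalog"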

TestAddons/parallel/Ingress (18.93s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-652898 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-652898 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-652898 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7db8e7e1-6172-4cea-a140-4bf1badbfa8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7db8e7e1-6172-4cea-a140-4bf1badbfa8e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.053775465s
I1011 21:06:00.255600  875861 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-652898 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable ingress-dns --alsologtostderr -v=1: (1.858128702s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable ingress --alsologtostderr -v=1: (7.807464055s)
--- PASS: TestAddons/parallel/Ingress (18.93s)

TestAddons/parallel/InspektorGadget (11.72s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-84nrf" [f6869c29-44bd-49a7-a549-e9a3b7ac07b9] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0043162s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable inspektor-gadget --alsologtostderr -v=1: (5.718590728s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

TestAddons/parallel/MetricsServer (6.92s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.826767ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qb7lm" [a5df8f56-ef8b-4756-bcd0-d8d56155142f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004596724s
addons_test.go:402: (dbg) Run:  kubectl --context addons-652898 top pods -n kube-system
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.92s)

TestAddons/parallel/CSI (59.5s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1011 21:05:07.409664  875861 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1011 21:05:07.415626  875861 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1011 21:05:07.415660  875861 kapi.go:107] duration metric: took 8.315476ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.32748ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2209a455-a7ef-4039-9ae5-d0a1884a351a] Pending
helpers_test.go:344: "task-pv-pod" [2209a455-a7ef-4039-9ae5-d0a1884a351a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2209a455-a7ef-4039-9ae5-d0a1884a351a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003948801s
addons_test.go:511: (dbg) Run:  kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-652898 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-652898 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-652898 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-652898 delete pod task-pv-pod: (1.188959456s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-652898 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [db0a6f4f-a576-4094-9159-fd1814ea6e0f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [db0a6f4f-a576-4094-9159-fd1814ea6e0f] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003868957s
addons_test.go:553: (dbg) Run:  kubectl --context addons-652898 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-652898 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-652898 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.387837605s)
--- PASS: TestAddons/parallel/CSI (59.50s)
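Condensed, the CSI exercise above is a provision → snapshot → restore round trip; every object name and testdata path below is taken from the log (the yaml files live in the minikube integration-test tree, and the csi-hostpath-driver and volumesnapshots addons must be enabled):

    # provision a PVC and a pod that binds it
    kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pvc.yaml      # PVC "hpvc"
    kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pv-pod.yaml   # pod "task-pv-pod"
    # snapshot the volume, then drop the original pod and claim
    kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/snapshot.yaml # "new-snapshot-demo"
    kubectl --context addons-652898 delete pod task-pv-pod
    kubectl --context addons-652898 delete pvc hpvc
    # restore the snapshot into a new claim and mount it from a new pod
    kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore"
    kubectl --context addons-652898 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore"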

TestAddons/parallel/Headlamp (15.82s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-652898 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-652898 --alsologtostderr -v=1: (1.052049148s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8h2cc" [8c274794-3c41-4a7f-85da-6b43be9b6175] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-8h2cc" [8c274794-3c41-4a7f-85da-6b43be9b6175] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8h2cc" [8c274794-3c41-4a7f-85da-6b43be9b6175] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004593056s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable headlamp --alsologtostderr -v=1: (5.764058997s)
--- PASS: TestAddons/parallel/Headlamp (15.82s)

TestAddons/parallel/CloudSpanner (5.7s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-fk7r7" [2a08aab0-90e6-4e78-a3f7-44d0e4c36a49] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00508996s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (51.86s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-652898 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-652898 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d78daa51-a224-4c94-a44e-1d8295c578ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d78daa51-a224-4c94-a44e-1d8295c578ae] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d78daa51-a224-4c94-a44e-1d8295c578ae] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003586947s
addons_test.go:902: (dbg) Run:  kubectl --context addons-652898 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 ssh "cat /opt/local-path-provisioner/pvc-172028b0-b8a7-4fac-9560-4ab4972cb702_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-652898 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-652898 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.551036253s)
--- PASS: TestAddons/parallel/LocalPath (51.86s)

TestAddons/parallel/NvidiaDevicePlugin (6.96s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rkj87" [99c05700-f3f9-40c0-a106-a77ad2e167e3] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003146697s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.96s)

TestAddons/parallel/Yakd (11.78s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4cxxt" [2c3e31b5-0459-4be2-9712-af1634ca3f01] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00400798s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-652898 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-652898 addons disable yakd --alsologtostderr -v=1: (5.775792969s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

TestAddons/StoppedEnableDisable (12.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-652898
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-652898: (11.980647423s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-652898
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-652898
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-652898
--- PASS: TestAddons/StoppedEnableDisable (12.24s)

TestCertOptions (32.76s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-549123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-549123 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.097545945s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-549123 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-549123 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-549123 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-549123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-549123
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-549123: (2.006933916s)
--- PASS: TestCertOptions (32.76s)

TestCertExpiration (229.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-428232 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1011 21:43:39.968825  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-428232 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.521734803s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-428232 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-428232 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.828637019s)
helpers_test.go:175: Cleaning up "cert-expiration-428232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-428232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-428232: (2.43495421s)
--- PASS: TestCertExpiration (229.79s)
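What this test drives, in plain commands: start a cluster whose certificates expire in three minutes, wait out the window, then start the same profile again with a long expiration so the expired certs get regenerated. The flags and values below are from the log; the explicit sleep is an assumption about how one would reproduce by hand the ~3m gap the test waits through:

    # issue certs that expire almost immediately
    minikube start -p cert-expiration-428232 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180  # let the 3m certificates lapse
    # a restart with a longer expiration must regenerate the expired certs
    minikube start -p cert-expiration-428232 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd
    minikube delete -p cert-expiration-428232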

TestForceSystemdFlag (40.58s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-959187 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-959187 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.259508352s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-959187 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-959187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-959187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-959187: (2.014020545s)
--- PASS: TestForceSystemdFlag (40.58s)

TestForceSystemdEnv (48.51s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-429719 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-429719 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.411002096s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-429719 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-429719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-429719
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-429719: (2.583790355s)
--- PASS: TestForceSystemdEnv (48.51s)

TestDockerEnvContainerd (47.43s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-713913 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-713913 --driver=docker  --container-runtime=containerd: (31.813572033s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-713913"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Fh4gDceW5zcm/agent.897704" SSH_AGENT_PID="897705" DOCKER_HOST=ssh://docker@127.0.0.1:33878 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Fh4gDceW5zcm/agent.897704" SSH_AGENT_PID="897705" DOCKER_HOST=ssh://docker@127.0.0.1:33878 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Fh4gDceW5zcm/agent.897704" SSH_AGENT_PID="897705" DOCKER_HOST=ssh://docker@127.0.0.1:33878 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.209563058s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Fh4gDceW5zcm/agent.897704" SSH_AGENT_PID="897705" DOCKER_HOST=ssh://docker@127.0.0.1:33878 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-713913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-713913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-713913: (1.942593038s)
--- PASS: TestDockerEnvContainerd (47.43s)
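The docker-env round trip above is the containerd variant of minikube's docker-env: the client connects to the node over SSH rather than to a local Docker socket. Reproduced by hand it looks like this; the eval form is the usual way to consume the output (an assumption, since the test scripts the same thing through bash -c with explicit SSH_AUTH_SOCK/DOCKER_HOST values):

    # point the local docker client at the cluster over SSH
    eval "$(minikube -p dockerenv-713913 docker-env --ssh-host --ssh-add)"
    docker version    # now answered by the minikube node
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls   # the freshly built image is listed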

TestErrorSpam/setup (32.39s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-174891 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-174891 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-174891 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-174891 --driver=docker  --container-runtime=containerd: (32.390254334s)
--- PASS: TestErrorSpam/setup (32.39s)

TestErrorSpam/start (0.74s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.76s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.87s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 stop: (1.260689074s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-174891 --log_dir /tmp/nospam-174891 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19749-870468/.minikube/files/etc/test/nested/copy/875861/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.72s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-807114 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.716071454s)
--- PASS: TestFunctional/serial/StartWithProxy (50.72s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.49s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1011 21:08:52.613596  875861 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-807114 --alsologtostderr -v=8: (6.487777793s)
functional_test.go:663: soft start took 6.491786503s for "functional-807114" cluster.
I1011 21:08:59.102096  875861 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.49s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-807114 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:3.1: (1.576565313s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:3.3: (1.416036443s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 cache add registry.k8s.io/pause:latest: (1.188798944s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-807114 /tmp/TestFunctionalserialCacheCmdcacheadd_local2127658193/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache add minikube-local-cache-test:functional-807114
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache delete minikube-local-cache-test:functional-807114
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-807114
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.312958ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 cache reload: (1.11990807s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
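
The sequence above is the useful recovery pattern: when an image is removed from the node's containerd store, minikube cache reload repopulates it from the host-side cache. A minimal Go sketch of the same flow, assuming a minikube binary on PATH (this run invokes out/minikube-linux-arm64) and the running functional-807114 profile from this report:

// A sketch only: remove / verify-missing / reload / verify-present, as logged above.
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and echoes combined output, like the harness does.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-807114"
	run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// While the image is absent, inspecti must fail (the exit status 1 above).
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	run("-p", p, "cache", "reload")
	// After the reload, the image must be back in the node's containerd store.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image missing after reload:", err)
	}
}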

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 kubectl -- --context functional-807114 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-807114 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (62.25s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-807114 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.24578143s)
functional_test.go:761: restart took 1m2.245892289s for "functional-807114" cluster.
I1011 21:10:09.840211  875861 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (62.25s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-807114 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 logs: (1.698479259s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 logs --file /tmp/TestFunctionalserialLogsFileCmd3397843535/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 logs --file /tmp/TestFunctionalserialLogsFileCmd3397843535/001/logs.txt: (1.692551832s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
TestFunctional/serial/InvalidService (4.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-807114 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-807114
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-807114: exit status 115 (584.906574ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31776 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-807114 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)
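
The non-zero exit here is deliberate: minikube service probes for a running backing pod and exits with status 115 (SVC_UNREACHABLE) when none exists, even though the NodePort URL itself resolves. A minimal sketch of asserting that exit code from Go, assuming minikube on PATH and the invalid-svc manifest applied as above:

// A sketch only: checks the SVC_UNREACHABLE exit code (115) seen above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-807114").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("service correctly reported unreachable (exit status 115)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}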

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 config get cpus: exit status 14 (84.98977ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 config get cpus: exit status 14 (70.624689ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
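
Both "Non-zero exit" entries above capture the expected behavior: config get on an unset key fails with exit status 14. A minimal sketch of asserting that from Go, assuming minikube on PATH:

// A sketch only: `config get` on an unset key should exit with status 14.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-807114"
	exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run() // ensure the key is unset
	err := exec.Command("minikube", "-p", p, "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("cpus is unset, as expected (exit status 14)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}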

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-807114 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-807114 --alsologtostderr -v=1] ...
E1011 21:10:57.396910  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:508: unable to kill pid 912475: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.77s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-807114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (173.676173ms)

                                                
                                                
-- stdout --
	* [functional-807114] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:10:49.507562  912173 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:10:49.507705  912173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:10:49.507715  912173 out.go:358] Setting ErrFile to fd 2...
	I1011 21:10:49.507720  912173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:10:49.507938  912173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:10:49.508277  912173 out.go:352] Setting JSON to false
	I1011 21:10:49.509224  912173 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17597,"bootTime":1728663453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 21:10:49.509294  912173 start.go:139] virtualization:  
	I1011 21:10:49.511455  912173 out.go:177] * [functional-807114] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:10:49.513683  912173 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:10:49.513863  912173 notify.go:220] Checking for updates...
	I1011 21:10:49.517112  912173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:10:49.518694  912173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:10:49.520345  912173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 21:10:49.521777  912173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:10:49.523464  912173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:10:49.525724  912173 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:10:49.526571  912173 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:10:49.551917  912173 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:10:49.552041  912173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:10:49.605870  912173 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:10:49.595955636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:10:49.605983  912173 docker.go:318] overlay module found
	I1011 21:10:49.608237  912173 out.go:177] * Using the docker driver based on existing profile
	I1011 21:10:49.610078  912173 start.go:297] selected driver: docker
	I1011 21:10:49.610092  912173 start.go:901] validating driver "docker" against &{Name:functional-807114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-807114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:10:49.610205  912173 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:10:49.612661  912173 out.go:201] 
	W1011 21:10:49.614153  912173 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1011 21:10:49.616185  912173 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)
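
The dry run validates flags without mutating the cluster; the 250MB request trips minikube's 1800MB floor and yields exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as the stderr above shows. A minimal sketch, assuming minikube on PATH and the existing functional-807114 profile:

// A sketch only: a 250MB request is below the 1800MB floor, so the dry-run
// start should exit with status 23 without touching the cluster.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "start", "-p", "functional-807114", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=containerd").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("memory validation rejected 250MB, as expected (exit status 23)")
	}
}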

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-807114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-807114 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (239.051224ms)

                                                
                                                
-- stdout --
	* [functional-807114] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1011 21:10:49.281414  912067 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:10:49.281817  912067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:10:49.281830  912067 out.go:358] Setting ErrFile to fd 2...
	I1011 21:10:49.281836  912067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:10:49.282656  912067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:10:49.283190  912067 out.go:352] Setting JSON to false
	I1011 21:10:49.284309  912067 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":17597,"bootTime":1728663453,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 21:10:49.284547  912067 start.go:139] virtualization:  
	I1011 21:10:49.291849  912067 out.go:177] * [functional-807114] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1011 21:10:49.293840  912067 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:10:49.293963  912067 notify.go:220] Checking for updates...
	I1011 21:10:49.298178  912067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:10:49.300593  912067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:10:49.302727  912067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 21:10:49.304469  912067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:10:49.306892  912067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:10:49.309626  912067 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:10:49.310139  912067 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:10:49.350479  912067 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:10:49.350605  912067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:10:49.431984  912067 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-11 21:10:49.421984203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:10:49.432100  912067 docker.go:318] overlay module found
	I1011 21:10:49.434609  912067 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1011 21:10:49.436270  912067 start.go:297] selected driver: docker
	I1011 21:10:49.436285  912067 start.go:901] validating driver "docker" against &{Name:functional-807114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-807114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1011 21:10:49.436383  912067 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:10:49.438815  912067 out.go:201] 
	W1011 21:10:49.440367  912067 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1011 21:10:49.441956  912067 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-807114 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-807114 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-s28vs" [b600d821-5305-4640-b118-5cd279e8e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-s28vs" [b600d821-5305-4640-b118-5cd279e8e6e7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004553292s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31718
functional_test.go:1675: http://192.168.49.2:31718: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-s28vs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31718
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.66s)
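
The test above is a plain NodePort round trip: create a deployment, expose it, resolve the URL via minikube service --url, and GET it. A minimal sketch of the same round trip, assuming kubectl and minikube on PATH and omitting the pod-readiness wait that the harness performs for up to 10m:

// A sketch only: deploy, expose as NodePort, resolve the URL, and GET it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	ctx := "functional-807114"
	exec.Command("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver-arm:1.8").Run()
	exec.Command("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080").Run()
	// In a real script, poll until the pod is Running before probing.
	out, err := exec.Command("minikube", "-p", ctx, "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}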

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cc013648-96ec-44a8-b30d-74d93c9b2fbb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003389972s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-807114 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-807114 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-807114 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-807114 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c761511-ee74-46b7-9430-92e0d524958c] Pending
helpers_test.go:344: "sp-pod" [6c761511-ee74-46b7-9430-92e0d524958c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c761511-ee74-46b7-9430-92e0d524958c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00589272s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-807114 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-807114 delete -f testdata/storage-provisioner/pod.yaml
E1011 21:10:36.901492  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:36.907864  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:36.919283  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:36.940690  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:36.982094  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:37.063491  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:37.225120  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:10:37.546751  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-807114 delete -f testdata/storage-provisioner/pod.yaml: (1.175178976s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-807114 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23c37096-4741-4776-a125-a1f6d4f20a06] Pending
E1011 21:10:38.188430  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [23c37096-4741-4776-a125-a1f6d4f20a06] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003304877s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-807114 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.17s)
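
The two pod generations above share one PVC, which is what makes the final ls /tmp/mount meaningful: the file written by the first pod must survive into the second. A minimal sketch, assuming kubectl on PATH, the testdata manifests referenced in the log, and with the pod-readiness waits omitted:

// A sketch only: a file written through the PVC must survive pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-807114 context and echoes output.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-807114"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for sp-pod to be Running before each exec in a real script)
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // should list "foo"
}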

                                                
                                    
TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh -n functional-807114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cp functional-807114:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2759348597/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh -n functional-807114 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh -n functional-807114 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.19s)
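
A minimal sketch of the first copy round trip above (push a file into the node with minikube cp, read it back over ssh), assuming minikube on PATH and the profile from this run:

// A sketch only: copy a host file into the node, then read it back over ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-807114"
	if err := exec.Command("minikube", "-p", p, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command("minikube", "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped contents: %s", out)
}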

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/875861/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /etc/test/nested/copy/875861/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/875861.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /etc/ssl/certs/875861.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/875861.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /usr/share/ca-certificates/875861.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8758612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /etc/ssl/certs/8758612.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8758612.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /usr/share/ca-certificates/8758612.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.44s)
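
The three paths checked per certificate above are the synced .pem in two locations plus its OpenSSL-hash name under /etc/ssl/certs. A minimal sketch probing the same paths, assuming minikube on PATH; the 875861 filenames are specific to this run:

// A sketch only: each synced certificate should be readable at all three paths.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/875861.pem",
		"/usr/share/ca-certificates/875861.pem",
		"/etc/ssl/certs/51391683.0", // hash-named link to the same cert
	}
	for _, path := range paths {
		err := exec.Command("minikube", "-p", "functional-807114",
			"ssh", "sudo cat "+path).Run()
		fmt.Println(path, "present:", err == nil)
	}
}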

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-807114 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "sudo systemctl is-active docker": exit status 1 (536.851425ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "sudo systemctl is-active crio": exit status 1 (420.135813ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.96s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 909623: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-807114 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [072021ff-14bb-4609-b671-b4c712ac280b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [072021ff-14bb-4609-b671-b4c712ac280b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004123023s
I1011 21:10:27.912578  875861 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-807114 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.173.168 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-807114 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-807114 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-807114 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-l8n4g" [5d1f46f7-135a-4f19-9db0-14cc52d680b3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1011 21:10:39.470458  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-64b4f8f9ff-l8n4g" [5d1f46f7-135a-4f19-9db0-14cc52d680b3] Running
E1011 21:10:42.032647  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005414354s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.73s)

TestFunctional/parallel/ServiceCmd/List (0.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.70s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "408.066702ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "94.822264ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service list -o json
functional_test.go:1494: Took "654.733239ms" to run "out/minikube-linux-arm64 -p functional-807114 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "428.138208ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "111.362735ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32750
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/MountCmd/any-port (7.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdany-port2965489723/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728681046812514401" to /tmp/TestFunctionalparallelMountCmdany-port2965489723/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728681046812514401" to /tmp/TestFunctionalparallelMountCmdany-port2965489723/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728681046812514401" to /tmp/TestFunctionalparallelMountCmdany-port2965489723/001/test-1728681046812514401
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.530317ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1011 21:10:47.285091  875861 retry.go:31] will retry after 404.186352ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 11 21:10 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 11 21:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 11 21:10 test-1728681046812514401
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh cat /mount-9p/test-1728681046812514401
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-807114 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fd4e655f-360a-4a24-b818-f8d4c132eb37] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fd4e655f-360a-4a24-b818-f8d4c132eb37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fd4e655f-360a-4a24-b818-f8d4c132eb37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003331436s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-807114 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdany-port2965489723/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.30s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service hello-node --url --format={{.IP}}
E1011 21:10:47.154822  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32750
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/MountCmd/specific-port (2.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdspecific-port924034526/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (531.307878ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1011 21:10:54.645329  875861 retry.go:31] will retry after 347.218376ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdspecific-port924034526/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "sudo umount -f /mount-9p": exit status 1 (409.958698ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-807114 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdspecific-port924034526/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T" /mount1: exit status 1 (928.500709ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1011 21:10:57.262258  875861 retry.go:31] will retry after 700.04358ms: exit status 1
2024/10/11 21:10:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-807114 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-807114 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3912171385/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.93s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 version -o=json --components: (1.339705067s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-807114 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-807114
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-807114
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-807114 image ls --format short --alsologtostderr:
I1011 21:11:06.353143  915026 out.go:345] Setting OutFile to fd 1 ...
I1011 21:11:06.353340  915026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.353368  915026 out.go:358] Setting ErrFile to fd 2...
I1011 21:11:06.353400  915026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.353735  915026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
I1011 21:11:06.354534  915026 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.354732  915026 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.355315  915026 cli_runner.go:164] Run: docker container inspect functional-807114 --format={{.State.Status}}
I1011 21:11:06.376309  915026 ssh_runner.go:195] Run: systemctl --version
I1011 21:11:06.376347  915026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-807114
I1011 21:11:06.399584  915026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/functional-807114/id_rsa Username:docker}
I1011 21:11:06.491046  915026 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-807114 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| docker.io/kicbase/echo-server               | functional-807114  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/minikube-local-cache-test | functional-807114  | sha256:b9e827 | 989B   |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-807114 image ls --format table --alsologtostderr:
I1011 21:11:06.937074  915179 out.go:345] Setting OutFile to fd 1 ...
I1011 21:11:06.937198  915179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.937225  915179 out.go:358] Setting ErrFile to fd 2...
I1011 21:11:06.937230  915179 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.937480  915179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
I1011 21:11:06.940365  915179 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.940509  915179 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.941064  915179 cli_runner.go:164] Run: docker container inspect functional-807114 --format={{.State.Status}}
I1011 21:11:06.960096  915179 ssh_runner.go:195] Run: systemctl --version
I1011 21:11:06.960160  915179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-807114
I1011 21:11:06.988724  915179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/functional-807114/id_rsa Username:docker}
I1011 21:11:07.083091  915179 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-807114 image ls --format json --alsologtostderr:
[{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b9e827016e19102b1b1fab2260dd900dabd5d018ca19942332c6f8f5134b9d53","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-807114"],"size":"989"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],
"size":"249461"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7
edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-807114"],"size":"2173567"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656
a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e0
5b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-807114 image ls --format json --alsologtostderr:
I1011 21:11:06.661251  915096 out.go:345] Setting OutFile to fd 1 ...
I1011 21:11:06.661673  915096 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.661683  915096 out.go:358] Setting ErrFile to fd 2...
I1011 21:11:06.661698  915096 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.661949  915096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
I1011 21:11:06.662758  915096 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.662898  915096 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.663443  915096 cli_runner.go:164] Run: docker container inspect functional-807114 --format={{.State.Status}}
I1011 21:11:06.686803  915096 ssh_runner.go:195] Run: systemctl --version
I1011 21:11:06.686869  915096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-807114
I1011 21:11:06.709515  915096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/functional-807114/id_rsa Username:docker}
I1011 21:11:06.811281  915096 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-807114 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-807114
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b9e827016e19102b1b1fab2260dd900dabd5d018ca19942332c6f8f5134b9d53
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-807114
size: "989"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-807114 image ls --format yaml --alsologtostderr:
I1011 21:11:06.356421  915027 out.go:345] Setting OutFile to fd 1 ...
I1011 21:11:06.356530  915027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.356541  915027 out.go:358] Setting ErrFile to fd 2...
I1011 21:11:06.356547  915027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.356773  915027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
I1011 21:11:06.357403  915027 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.357566  915027 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.358119  915027 cli_runner.go:164] Run: docker container inspect functional-807114 --format={{.State.Status}}
I1011 21:11:06.375956  915027 ssh_runner.go:195] Run: systemctl --version
I1011 21:11:06.376019  915027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-807114
I1011 21:11:06.398396  915027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/functional-807114/id_rsa Username:docker}
I1011 21:11:06.495189  915027 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-807114 ssh pgrep buildkitd: exit status 1 (357.96444ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image build -t localhost/my-image:functional-807114 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 image build -t localhost/my-image:functional-807114 testdata/build --alsologtostderr: (3.323983514s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-807114 image build -t localhost/my-image:functional-807114 testdata/build --alsologtostderr:
I1011 21:11:06.993998  915185 out.go:345] Setting OutFile to fd 1 ...
I1011 21:11:06.995850  915185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.995902  915185 out.go:358] Setting ErrFile to fd 2...
I1011 21:11:06.995924  915185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1011 21:11:06.996390  915185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
I1011 21:11:06.997697  915185 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.999034  915185 config.go:182] Loaded profile config "functional-807114": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1011 21:11:06.999902  915185 cli_runner.go:164] Run: docker container inspect functional-807114 --format={{.State.Status}}
I1011 21:11:07.020286  915185 ssh_runner.go:195] Run: systemctl --version
I1011 21:11:07.020416  915185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-807114
I1011 21:11:07.041848  915185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33888 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/functional-807114/id_rsa Username:docker}
I1011 21:11:07.137009  915185 build_images.go:161] Building image from path: /tmp/build.3296912530.tar
I1011 21:11:07.137093  915185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1011 21:11:07.151093  915185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3296912530.tar
I1011 21:11:07.154654  915185 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3296912530.tar: stat -c "%s %y" /var/lib/minikube/build/build.3296912530.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3296912530.tar': No such file or directory
I1011 21:11:07.154696  915185 ssh_runner.go:362] scp /tmp/build.3296912530.tar --> /var/lib/minikube/build/build.3296912530.tar (3072 bytes)
I1011 21:11:07.181200  915185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3296912530
I1011 21:11:07.190400  915185 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3296912530 -xf /var/lib/minikube/build/build.3296912530.tar
I1011 21:11:07.200129  915185 containerd.go:394] Building image: /var/lib/minikube/build/build.3296912530
I1011 21:11:07.200232  915185 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3296912530 --local dockerfile=/var/lib/minikube/build/build.3296912530 --output type=image,name=localhost/my-image:functional-807114
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.7s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:13a722e26de1cda09c6e62a706925a1ad837ee30a578e8457f4c9aa03507c83f
#8 exporting manifest sha256:13a722e26de1cda09c6e62a706925a1ad837ee30a578e8457f4c9aa03507c83f 0.0s done
#8 exporting config sha256:257a52390abc780426c3b0b2560e8035e4ff454e63600c352e9ff1f7bc8122d0 0.0s done
#8 naming to localhost/my-image:functional-807114 done
#8 DONE 0.2s
I1011 21:11:10.202899  915185 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3296912530 --local dockerfile=/var/lib/minikube/build/build.3296912530 --output type=image,name=localhost/my-image:functional-807114: (3.002633496s)
I1011 21:11:10.202980  915185 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3296912530
I1011 21:11:10.214229  915185 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3296912530.tar
I1011 21:11:10.224158  915185 build_images.go:217] Built localhost/my-image:functional-807114 from /tmp/build.3296912530.tar
I1011 21:11:10.224204  915185 build_images.go:133] succeeded building to: functional-807114
I1011 21:11:10.224210  915185 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-807114
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr: (1.503952191s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr: (1.053389576s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-807114
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-807114 image load --daemon kicbase/echo-server:functional-807114 --alsologtostderr: (1.038537268s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image save kicbase/echo-server:functional-807114 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image rm kicbase/echo-server:functional-807114 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-807114
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-807114 image save --daemon kicbase/echo-server:functional-807114 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-807114
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
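
Note: taken together, the ImageSave*/ImageRemove/ImageLoad* tests exercise a full image round trip. A sketch using the profile and tar path from this run (plain `minikube` standing in for out/minikube-linux-arm64):

    minikube -p functional-807114 image save kicbase/echo-server:functional-807114 ./echo-server-save.tar
    minikube -p functional-807114 image rm kicbase/echo-server:functional-807114
    minikube -p functional-807114 image load ./echo-server-save.tar
    minikube -p functional-807114 image save --daemon kicbase/echo-server:functional-807114
    minikube -p functional-807114 image ls   # confirm the image is back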

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-807114
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-807114
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-807114
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (115.65s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-514929 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1011 21:11:17.878902  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:11:58.840655  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-514929 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m54.845583159s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (115.65s)
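
Note: the HA cluster under test is started with the flags shown above; a sketch of reproducing it manually (same flags as this run, plain `minikube` for the CI binary):

    minikube start -p ha-514929 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    minikube -p ha-514929 status -v=7 --alsologtostderr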

TestMultiControlPlane/serial/DeployApp (33.3s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- rollout status deployment/busybox
E1011 21:13:20.761972  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-514929 -- rollout status deployment/busybox: (30.015805501s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-7nfd9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-88xmb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-bq6cr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-7nfd9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-88xmb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-bq6cr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-7nfd9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-88xmb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-bq6cr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.30s)
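
Note: after the rollout, the test resolves kubernetes.io, kubernetes.default, and the fully qualified service name from each busybox replica. One such check as a sketch (the pod name varies per rollout):

    kubectl --context ha-514929 rollout status deployment/busybox
    kubectl --context ha-514929 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local   # <busybox-pod> is a placeholder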

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-7nfd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-7nfd9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-88xmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-88xmb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-bq6cr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-514929 -- exec busybox-7dff88458-bq6cr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)
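
Note: host reachability is verified by resolving host.minikube.internal inside a pod and pinging the gateway address. Sketch (pod name is a placeholder; 192.168.49.1 is the docker-driver gateway seen in this run):

    kubectl --context ha-514929 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-514929 exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"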

TestMultiControlPlane/serial/AddWorkerNode (27.52s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-514929 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-514929 -v=7 --alsologtostderr: (26.51899662s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr: (1.003094466s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.52s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-514929 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.056446838s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.06s)

TestMultiControlPlane/serial/CopyFile (19.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp testdata/cp-test.txt ha-514929:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315933869/001/cp-test_ha-514929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929:/home/docker/cp-test.txt ha-514929-m02:/home/docker/cp-test_ha-514929_ha-514929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test_ha-514929_ha-514929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929:/home/docker/cp-test.txt ha-514929-m03:/home/docker/cp-test_ha-514929_ha-514929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test_ha-514929_ha-514929-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929:/home/docker/cp-test.txt ha-514929-m04:/home/docker/cp-test_ha-514929_ha-514929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test_ha-514929_ha-514929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp testdata/cp-test.txt ha-514929-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315933869/001/cp-test_ha-514929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m02:/home/docker/cp-test.txt ha-514929:/home/docker/cp-test_ha-514929-m02_ha-514929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test_ha-514929-m02_ha-514929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m02:/home/docker/cp-test.txt ha-514929-m03:/home/docker/cp-test_ha-514929-m02_ha-514929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test_ha-514929-m02_ha-514929-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m02:/home/docker/cp-test.txt ha-514929-m04:/home/docker/cp-test_ha-514929-m02_ha-514929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test_ha-514929-m02_ha-514929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp testdata/cp-test.txt ha-514929-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315933869/001/cp-test_ha-514929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m03:/home/docker/cp-test.txt ha-514929:/home/docker/cp-test_ha-514929-m03_ha-514929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test_ha-514929-m03_ha-514929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m03:/home/docker/cp-test.txt ha-514929-m02:/home/docker/cp-test_ha-514929-m03_ha-514929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test_ha-514929-m03_ha-514929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m03:/home/docker/cp-test.txt ha-514929-m04:/home/docker/cp-test_ha-514929-m03_ha-514929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test_ha-514929-m03_ha-514929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp testdata/cp-test.txt ha-514929-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3315933869/001/cp-test_ha-514929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m04:/home/docker/cp-test.txt ha-514929:/home/docker/cp-test_ha-514929-m04_ha-514929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929 "sudo cat /home/docker/cp-test_ha-514929-m04_ha-514929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m04:/home/docker/cp-test.txt ha-514929-m02:/home/docker/cp-test_ha-514929-m04_ha-514929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test_ha-514929-m04_ha-514929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 cp ha-514929-m04:/home/docker/cp-test.txt ha-514929-m03:/home/docker/cp-test_ha-514929-m04_ha-514929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 ssh -n ha-514929-m03 "sudo cat /home/docker/cp-test_ha-514929-m04_ha-514929-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.10s)
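
Note: CopyFile round-trips a file between the host and every node pair via `minikube cp`, verifying each copy over SSH. One leg of the matrix as a sketch:

    minikube -p ha-514929 cp testdata/cp-test.txt ha-514929-m02:/home/docker/cp-test.txt
    minikube -p ha-514929 ssh -n ha-514929-m02 "sudo cat /home/docker/cp-test.txt"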

TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 node stop m02 -v=7 --alsologtostderr: (12.168994656s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr: exit status 7 (792.260627ms)
-- stdout --
	ha-514929
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-514929-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514929-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-514929-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1011 21:14:43.962482  931371 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:14:43.962668  931371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:14:43.962701  931371 out.go:358] Setting ErrFile to fd 2...
	I1011 21:14:43.962723  931371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:14:43.963022  931371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:14:43.963237  931371 out.go:352] Setting JSON to false
	I1011 21:14:43.963324  931371 mustload.go:65] Loading cluster: ha-514929
	I1011 21:14:43.963376  931371 notify.go:220] Checking for updates...
	I1011 21:14:43.964727  931371 config.go:182] Loaded profile config "ha-514929": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:14:43.964773  931371 status.go:174] checking status of ha-514929 ...
	I1011 21:14:43.965492  931371 cli_runner.go:164] Run: docker container inspect ha-514929 --format={{.State.Status}}
	I1011 21:14:43.986482  931371 status.go:371] ha-514929 host status = "Running" (err=<nil>)
	I1011 21:14:43.986504  931371 host.go:66] Checking if "ha-514929" exists ...
	I1011 21:14:43.987028  931371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514929
	I1011 21:14:44.012114  931371 host.go:66] Checking if "ha-514929" exists ...
	I1011 21:14:44.012991  931371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:14:44.013044  931371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514929
	I1011 21:14:44.044830  931371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33893 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/ha-514929/id_rsa Username:docker}
	I1011 21:14:44.139829  931371 ssh_runner.go:195] Run: systemctl --version
	I1011 21:14:44.144151  931371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:14:44.156268  931371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:14:44.211798  931371 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-11 21:14:44.200467505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:14:44.212403  931371 kubeconfig.go:125] found "ha-514929" server: "https://192.168.49.254:8443"
	I1011 21:14:44.212426  931371 api_server.go:166] Checking apiserver status ...
	I1011 21:14:44.212471  931371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:14:44.224126  931371 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I1011 21:14:44.233414  931371 api_server.go:182] apiserver freezer: "7:freezer:/docker/971b2df8f092bb534f60f1be251de38817380cbfabef0ad73eb10101cf3249af/kubepods/burstable/pod21e1ef6bbd03a3fb3771a4e7cb61db0c/c838930390345a194161b50abed4f113bc86a46141f66ea132472638222b36f3"
	I1011 21:14:44.233501  931371 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/971b2df8f092bb534f60f1be251de38817380cbfabef0ad73eb10101cf3249af/kubepods/burstable/pod21e1ef6bbd03a3fb3771a4e7cb61db0c/c838930390345a194161b50abed4f113bc86a46141f66ea132472638222b36f3/freezer.state
	I1011 21:14:44.242180  931371 api_server.go:204] freezer state: "THAWED"
	I1011 21:14:44.242209  931371 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1011 21:14:44.250570  931371 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1011 21:14:44.250607  931371 status.go:463] ha-514929 apiserver status = Running (err=<nil>)
	I1011 21:14:44.250617  931371 status.go:176] ha-514929 status: &{Name:ha-514929 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:14:44.250634  931371 status.go:174] checking status of ha-514929-m02 ...
	I1011 21:14:44.250958  931371 cli_runner.go:164] Run: docker container inspect ha-514929-m02 --format={{.State.Status}}
	I1011 21:14:44.268698  931371 status.go:371] ha-514929-m02 host status = "Stopped" (err=<nil>)
	I1011 21:14:44.268720  931371 status.go:384] host is not running, skipping remaining checks
	I1011 21:14:44.268728  931371 status.go:176] ha-514929-m02 status: &{Name:ha-514929-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:14:44.268748  931371 status.go:174] checking status of ha-514929-m03 ...
	I1011 21:14:44.269059  931371 cli_runner.go:164] Run: docker container inspect ha-514929-m03 --format={{.State.Status}}
	I1011 21:14:44.286795  931371 status.go:371] ha-514929-m03 host status = "Running" (err=<nil>)
	I1011 21:14:44.286819  931371 host.go:66] Checking if "ha-514929-m03" exists ...
	I1011 21:14:44.287122  931371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514929-m03
	I1011 21:14:44.308987  931371 host.go:66] Checking if "ha-514929-m03" exists ...
	I1011 21:14:44.309426  931371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:14:44.309486  931371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514929-m03
	I1011 21:14:44.326552  931371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33903 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/ha-514929-m03/id_rsa Username:docker}
	I1011 21:14:44.418367  931371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:14:44.438201  931371 kubeconfig.go:125] found "ha-514929" server: "https://192.168.49.254:8443"
	I1011 21:14:44.438244  931371 api_server.go:166] Checking apiserver status ...
	I1011 21:14:44.438334  931371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:14:44.476980  931371 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1323/cgroup
	I1011 21:14:44.488282  931371 api_server.go:182] apiserver freezer: "7:freezer:/docker/095d27e64c6ff230d253d2d84f014082443d1260f3322322797392332ec418b3/kubepods/burstable/podc9524da78ac5ed9240cca640af37f5b0/0cdc8b7b79d1b1808d325f19f35532cc2668dd5072bdd5effc2a9bd1080ec7e1"
	I1011 21:14:44.488395  931371 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/095d27e64c6ff230d253d2d84f014082443d1260f3322322797392332ec418b3/kubepods/burstable/podc9524da78ac5ed9240cca640af37f5b0/0cdc8b7b79d1b1808d325f19f35532cc2668dd5072bdd5effc2a9bd1080ec7e1/freezer.state
	I1011 21:14:44.498708  931371 api_server.go:204] freezer state: "THAWED"
	I1011 21:14:44.498776  931371 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1011 21:14:44.506758  931371 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1011 21:14:44.506789  931371 status.go:463] ha-514929-m03 apiserver status = Running (err=<nil>)
	I1011 21:14:44.506812  931371 status.go:176] ha-514929-m03 status: &{Name:ha-514929-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:14:44.506832  931371 status.go:174] checking status of ha-514929-m04 ...
	I1011 21:14:44.507156  931371 cli_runner.go:164] Run: docker container inspect ha-514929-m04 --format={{.State.Status}}
	I1011 21:14:44.524129  931371 status.go:371] ha-514929-m04 host status = "Running" (err=<nil>)
	I1011 21:14:44.524153  931371 host.go:66] Checking if "ha-514929-m04" exists ...
	I1011 21:14:44.524463  931371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-514929-m04
	I1011 21:14:44.548103  931371 host.go:66] Checking if "ha-514929-m04" exists ...
	I1011 21:14:44.548415  931371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:14:44.548453  931371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-514929-m04
	I1011 21:14:44.567849  931371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33908 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/ha-514929-m04/id_rsa Username:docker}
	I1011 21:14:44.675572  931371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:14:44.688851  931371 status.go:176] ha-514929-m04 status: &{Name:ha-514929-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
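
Note: the stderr trace above shows how `status` probes an apiserver: find the process, read its cgroup freezer state, then hit /healthz through the control-plane endpoint. A sketch of those steps run inside a control-plane node (the pid and cgroup path vary per run and are placeholders here; curl stands in for the client-go probe):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
    sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # expect THAWED
    curl -ks https://192.168.49.254:8443/healthz                  # expect 200 "ok"; curl is illustrative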

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.98s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.98s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 node start m02 -v=7 --alsologtostderr: (17.18761887s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.63s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-514929 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-514929 -v=7 --alsologtostderr
E1011 21:15:19.495248  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.501624  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.514472  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.536767  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.578113  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.659471  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:19.821473  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:20.142733  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:20.784717  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:22.066246  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:24.627606  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:29.749904  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:36.899902  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:15:39.992210  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-514929 -v=7 --alsologtostderr: (37.463916336s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-514929 --wait=true -v=7 --alsologtostderr
E1011 21:16:00.473524  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:04.604228  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:16:41.435585  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-514929 --wait=true -v=7 --alsologtostderr: (1m32.984270466s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-514929
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.63s)
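
Note: this test asserts that a full stop/start cycle preserves the node list. Sketch:

    minikube node list -p ha-514929
    minikube stop -p ha-514929 -v=7 --alsologtostderr
    minikube start -p ha-514929 --wait=true -v=7 --alsologtostderr
    minikube node list -p ha-514929   # expect the same nodes as before the restart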

TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 node delete m03 -v=7 --alsologtostderr: (9.549561301s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (36.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 stop -v=7 --alsologtostderr: (35.991842527s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr: exit status 7 (114.450828ms)
-- stdout --
	ha-514929
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514929-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-514929-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1011 21:18:02.815381  946261 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:18:02.816136  946261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:18:02.816153  946261 out.go:358] Setting ErrFile to fd 2...
	I1011 21:18:02.816158  946261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:18:02.816412  946261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:18:02.816596  946261 out.go:352] Setting JSON to false
	I1011 21:18:02.816642  946261 mustload.go:65] Loading cluster: ha-514929
	I1011 21:18:02.817082  946261 config.go:182] Loaded profile config "ha-514929": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:18:02.817103  946261 status.go:174] checking status of ha-514929 ...
	I1011 21:18:02.817637  946261 cli_runner.go:164] Run: docker container inspect ha-514929 --format={{.State.Status}}
	I1011 21:18:02.818143  946261 notify.go:220] Checking for updates...
	I1011 21:18:02.834352  946261 status.go:371] ha-514929 host status = "Stopped" (err=<nil>)
	I1011 21:18:02.834373  946261 status.go:384] host is not running, skipping remaining checks
	I1011 21:18:02.834380  946261 status.go:176] ha-514929 status: &{Name:ha-514929 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:18:02.834408  946261 status.go:174] checking status of ha-514929-m02 ...
	I1011 21:18:02.834733  946261 cli_runner.go:164] Run: docker container inspect ha-514929-m02 --format={{.State.Status}}
	I1011 21:18:02.850804  946261 status.go:371] ha-514929-m02 host status = "Stopped" (err=<nil>)
	I1011 21:18:02.850826  946261 status.go:384] host is not running, skipping remaining checks
	I1011 21:18:02.850834  946261 status.go:176] ha-514929-m02 status: &{Name:ha-514929-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:18:02.850868  946261 status.go:174] checking status of ha-514929-m04 ...
	I1011 21:18:02.851185  946261 cli_runner.go:164] Run: docker container inspect ha-514929-m04 --format={{.State.Status}}
	I1011 21:18:02.876155  946261 status.go:371] ha-514929-m04 host status = "Stopped" (err=<nil>)
	I1011 21:18:02.876175  946261 status.go:384] host is not running, skipping remaining checks
	I1011 21:18:02.876181  946261 status.go:176] ha-514929-m04 status: &{Name:ha-514929-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.11s)
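
Note: `minikube status` encodes component state in its exit code (bits for host, cluster, and kubernetes, per the status command's help text), so the exit status 7 above is the expected "everything stopped" result rather than a command failure. Sketch:

    minikube -p ha-514929 stop -v=7 --alsologtostderr
    minikube -p ha-514929 status -v=7 --alsologtostderr; echo "exit=$?"   # 7 when host, kubelet and apiserver are all down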

TestMultiControlPlane/serial/RestartCluster (77.01s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-514929 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1011 21:18:03.357052  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-514929 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.070640139s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.01s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (42.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-514929 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-514929 --control-plane -v=7 --alsologtostderr: (41.937867632s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-514929 status -v=7 --alsologtostderr: (1.002689252s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.94s)
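
Note: a control-plane node is added to the running HA cluster with `node add --control-plane`. Sketch (flags as in this run):

    minikube node add -p ha-514929 --control-plane -v=7 --alsologtostderr
    minikube -p ha-514929 status -v=7 --alsologtostderr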

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.011843752s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (60.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-331659 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1011 21:20:19.495609  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:36.901468  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:20:47.199049  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-331659 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m0.35797305s)
--- PASS: TestJSONOutput/start/Command (60.37s)
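
Note: with --output=json each progress step is emitted as a structured JSON event, which the Audit and parallel step-count subtests below then validate; --user tags the entries in minikube's audit log. Sketch (flags as in this run):

    minikube start -p json-output-331659 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd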

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-331659 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-331659 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-331659 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-331659 --output=json --user=testUser: (5.733338475s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-421019 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-421019 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.260266ms)

-- stdout --
	{"specversion":"1.0","id":"78ed9176-4eab-4f1c-8ef8-44ec7c2a0eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-421019] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"882be2aa-b79c-43fb-b011-6ec3eddc5f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"0160e5c5-500f-41d6-b428-37cb3ef318a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"311c2f24-1e41-47bf-b89a-49bc67aff021","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig"}}
	{"specversion":"1.0","id":"7d483551-fffd-4b76-9489-5ee2dd12bffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube"}}
	{"specversion":"1.0","id":"4cd58286-d406-44dd-839a-5bb1a91198ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8ca480c8-36a4-4300-8224-1c1579dc4b9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1af72143-9d4c-4334-ad50-e961d3c3eb83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-421019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-421019
--- PASS: TestErrorJSONOutput (0.23s)
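
For reference, the stdout above is minikube's --output=json format: one CloudEvents-style JSON record per line, with step events (io.k8s.sigs.minikube.step) carrying currentstep/totalsteps and a terminal error event carrying name/exitcode/message. The DistinctCurrentSteps and IncreasingCurrentSteps subtests earlier assert ordering properties of those step events. A minimal consumer sketch under those assumptions (illustrative code, not the test suite's own):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // event mirrors the fields visible in the TestErrorJSONOutput stdout above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        lastStep := -1
        sc := bufio.NewScanner(os.Stdin) // pipe in the stdout of `minikube ... --output=json`
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if !strings.HasPrefix(line, "{") {
                continue // tolerate non-JSON noise between events
            }
            var ev event
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                continue
            }
            switch ev.Type {
            case "io.k8s.sigs.minikube.step":
                // The parallel subtests assert steps are distinct and increasing.
                step, err := strconv.Atoi(ev.Data["currentstep"])
                if err != nil || step <= lastStep {
                    fmt.Println("bad step ordering at:", line)
                }
                lastStep = step
            case "io.k8s.sigs.minikube.error":
                fmt.Printf("error %s (exit code %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Piping the stdout block above through this would surface the DRV_UNSUPPORTED_OS error with exit code 56, matching the exit status 56 the test expects.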

TestKicCustomNetwork/create_custom_network (38.39s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-932388 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-932388 --network=: (36.314242042s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-932388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-932388
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-932388: (2.054726943s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.39s)

TestKicCustomNetwork/use_default_bridge_network (31.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-262609 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-262609 --network=bridge: (29.537014188s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-262609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-262609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-262609: (1.988630201s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.55s)

TestKicExistingNetwork (32.8s)

=== RUN   TestKicExistingNetwork
I1011 21:22:34.831372  875861 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1011 21:22:34.844919  875861 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1011 21:22:34.844991  875861 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1011 21:22:34.845009  875861 cli_runner.go:164] Run: docker network inspect existing-network
W1011 21:22:34.859307  875861 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1011 21:22:34.859339  875861 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1011 21:22:34.859356  875861 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1011 21:22:34.859463  875861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1011 21:22:34.874642  875861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a84816c0b608 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ce:0f:5d:9f} reservation:<nil>}
I1011 21:22:34.880086  875861 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1011 21:22:34.880543  875861 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40013a6400}
I1011 21:22:34.880568  875861 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1011 21:22:34.880626  875861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1011 21:22:34.950053  875861 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-074193 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-074193 --network=existing-network: (30.608605173s)
helpers_test.go:175: Cleaning up "existing-network-074193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-074193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-074193: (2.042664733s)
I1011 21:23:07.616915  875861 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.80s)
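
For context, the network_create.go lines above show the subnet walk: 192.168.49.0/24 is taken by an existing bridge, 192.168.58.0/24 is reserved, so 192.168.67.0/24 is picked. A toy sketch of that selection logic, assuming a fixed candidate list and a caller-supplied set of taken subnets (not minikube's actual network.go):

    package main

    import "fmt"

    // firstFreeSubnet returns the first candidate /24 that the
    // caller does not consider taken or reserved.
    func firstFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
        for _, cidr := range candidates {
            if taken[cidr] {
                continue // skip subnets already in use or reserved
            }
            return cidr, true
        }
        return "", false
    }

    func main() {
        // Candidate order mirrors the log: .49 is taken by the default
        // minikube bridge, .58 is reserved, so .67 is chosen.
        candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
        taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        if cidr, ok := firstFreeSubnet(candidates, taken); ok {
            fmt.Println("using free private subnet", cidr) // 192.168.67.0/24
        }
    }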

TestKicCustomSubnet (35.31s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-695027 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-695027 --subnet=192.168.60.0/24: (33.099143779s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-695027 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-695027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-695027
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-695027: (2.183026034s)
--- PASS: TestKicCustomSubnet (35.31s)
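
The subnet check above reduces to one docker invocation: read the network's first IPAM subnet and compare it to the requested one. A standalone sketch of that verification using the same docker command the test runs (profile name and subnet taken from this run; error handling condensed):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const network, want = "custom-subnet-695027", "192.168.60.0/24"
        out, err := exec.Command("docker", "network", "inspect", network,
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != want {
            fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
            return
        }
        fmt.Println("subnet matches", want)
    }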

TestKicStaticIP (32.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-763704 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-763704 --static-ip=192.168.200.200: (29.78897355s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-763704 ip
helpers_test.go:175: Cleaning up "static-ip-763704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-763704
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-763704: (2.160263024s)
--- PASS: TestKicStaticIP (32.10s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (68.2s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-101173 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-101173 --driver=docker  --container-runtime=containerd: (30.198609065s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-104685 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-104685 --driver=docker  --container-runtime=containerd: (32.305109555s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-101173
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-104685
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-104685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-104685
E1011 21:25:19.494962  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-104685: (1.976161747s)
helpers_test.go:175: Cleaning up "first-101173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-101173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-101173: (2.279953427s)
--- PASS: TestMinikubeProfile (68.20s)
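
The `profile list -ojson` calls above return machine-readable profile data. A sketch of extracting profile names from it; note the valid/Name field names here are assumptions about the JSON shape, not taken from this log, so verify them against real output:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        // Assumed shape: {"invalid": [...], "valid": [{"Name": "..."}, ...]}
        var listing struct {
            Valid []struct {
                Name string `json:"Name"`
            } `json:"valid"`
        }
        if err := json.Unmarshal(out, &listing); err != nil {
            fmt.Println("unexpected JSON shape:", err)
            return
        }
        for _, p := range listing.Valid {
            fmt.Println("profile:", p.Name)
        }
    }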

TestMountStart/serial/StartWithMountFirst (6.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-583949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-583949 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.116480772s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-583949 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-585885 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-585885 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.424129059s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.42s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-585885 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-583949 --alsologtostderr -v=5
E1011 21:25:36.899900  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-583949 --alsologtostderr -v=5: (1.60517269s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-585885 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-585885
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-585885: (1.211768222s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.89s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-585885
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-585885: (6.893620552s)
--- PASS: TestMountStart/serial/RestartStopped (7.89s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-585885 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (69.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-256272 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-256272 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.648522458s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.23s)

TestMultiNode/serial/DeployApp2Nodes (17.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- rollout status deployment/busybox
E1011 21:26:59.966678  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-256272 -- rollout status deployment/busybox: (15.330056405s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-pgff5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-tlxw8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-pgff5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-tlxw8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-pgff5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-tlxw8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.58s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-pgff5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-pgff5 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-tlxw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-256272 -- exec busybox-7dff88458-tlxw8 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
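
The pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the third space-separated field of the fifth output line, which is where busybox's nslookup prints the resolved host address (192.168.58.1 here, which is then pinged). The same extraction in Go, run against an illustrative transcript (the sample text below is an assumption about busybox's output layout, not captured from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third field.
    func hostIP(nslookupOutput string) string {
        lines := strings.Split(nslookupOutput, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        // Illustrative busybox-style nslookup transcript.
        sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.58.1 host.minikube.internal"
        fmt.Println(hostIP(sample)) // 192.168.58.1
    }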

TestMultiNode/serial/AddNode (16.43s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-256272 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-256272 -v 3 --alsologtostderr: (15.705892238s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.43s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-256272 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp testdata/cp-test.txt multinode-256272:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3474479578/001/cp-test_multinode-256272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272:/home/docker/cp-test.txt multinode-256272-m02:/home/docker/cp-test_multinode-256272_multinode-256272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test_multinode-256272_multinode-256272-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272:/home/docker/cp-test.txt multinode-256272-m03:/home/docker/cp-test_multinode-256272_multinode-256272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test_multinode-256272_multinode-256272-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp testdata/cp-test.txt multinode-256272-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3474479578/001/cp-test_multinode-256272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m02:/home/docker/cp-test.txt multinode-256272:/home/docker/cp-test_multinode-256272-m02_multinode-256272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test_multinode-256272-m02_multinode-256272.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m02:/home/docker/cp-test.txt multinode-256272-m03:/home/docker/cp-test_multinode-256272-m02_multinode-256272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test_multinode-256272-m02_multinode-256272-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp testdata/cp-test.txt multinode-256272-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3474479578/001/cp-test_multinode-256272-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m03:/home/docker/cp-test.txt multinode-256272:/home/docker/cp-test_multinode-256272-m03_multinode-256272.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272 "sudo cat /home/docker/cp-test_multinode-256272-m03_multinode-256272.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 cp multinode-256272-m03:/home/docker/cp-test.txt multinode-256272-m02:/home/docker/cp-test_multinode-256272-m03_multinode-256272-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 ssh -n multinode-256272-m02 "sudo cat /home/docker/cp-test_multinode-256272-m03_multinode-256272-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.10s)
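
Every cp step above follows one pattern: copy the file, then `ssh -n <node> "sudo cat <dest>"` and compare with the source. A condensed sketch of a single round trip using the same commands (profile, node, and paths from this run; error handling simplified):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const profile, node, remote = "multinode-256272", "multinode-256272-m02", "/home/docker/cp-test.txt"
        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            fmt.Println("read source:", err)
            return
        }
        // Copy the file in, then read it back over ssh.
        if err := exec.Command("out/minikube-linux-arm64", "-p", profile, "cp",
            "testdata/cp-test.txt", node+":"+remote).Run(); err != nil {
            fmt.Println("cp failed:", err)
            return
        }
        got, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ssh",
            "-n", node, "sudo cat "+remote).Output()
        if err != nil {
            fmt.Println("ssh cat failed:", err)
            return
        }
        if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
            fmt.Println("contents differ after copy")
            return
        }
        fmt.Println("round trip ok")
    }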

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-256272 node stop m03: (1.222404299s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-256272 status: exit status 7 (524.225716ms)

-- stdout --
	multinode-256272
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-256272-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-256272-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr: exit status 7 (483.076127ms)

-- stdout --
	multinode-256272
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-256272-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-256272-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1011 21:27:46.384105  999740 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:27:46.384271  999740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:27:46.384279  999740 out.go:358] Setting ErrFile to fd 2...
	I1011 21:27:46.384284  999740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:27:46.384522  999740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:27:46.384701  999740 out.go:352] Setting JSON to false
	I1011 21:27:46.384738  999740 mustload.go:65] Loading cluster: multinode-256272
	I1011 21:27:46.384828  999740 notify.go:220] Checking for updates...
	I1011 21:27:46.385165  999740 config.go:182] Loaded profile config "multinode-256272": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:27:46.385179  999740 status.go:174] checking status of multinode-256272 ...
	I1011 21:27:46.386037  999740 cli_runner.go:164] Run: docker container inspect multinode-256272 --format={{.State.Status}}
	I1011 21:27:46.405241  999740 status.go:371] multinode-256272 host status = "Running" (err=<nil>)
	I1011 21:27:46.405267  999740 host.go:66] Checking if "multinode-256272" exists ...
	I1011 21:27:46.405582  999740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-256272
	I1011 21:27:46.424330  999740 host.go:66] Checking if "multinode-256272" exists ...
	I1011 21:27:46.424682  999740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:27:46.424736  999740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-256272
	I1011 21:27:46.442254  999740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34015 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/multinode-256272/id_rsa Username:docker}
	I1011 21:27:46.531624  999740 ssh_runner.go:195] Run: systemctl --version
	I1011 21:27:46.536503  999740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:27:46.548785  999740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:27:46.600389  999740 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-11 21:27:46.590177765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:27:46.600972  999740 kubeconfig.go:125] found "multinode-256272" server: "https://192.168.58.2:8443"
	I1011 21:27:46.601005  999740 api_server.go:166] Checking apiserver status ...
	I1011 21:27:46.601051  999740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1011 21:27:46.612103  999740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1447/cgroup
	I1011 21:27:46.621654  999740 api_server.go:182] apiserver freezer: "7:freezer:/docker/f0fbbdd411627f2285f6a2142c6579c541c1789005456eaa4d63990df89299a4/kubepods/burstable/podc175bc235a0c63c07e0afd1aee29bbee/3a2cec46884d8ef5fb284dc7572a41df41034eafcd4a8c55d789f3459745340f"
	I1011 21:27:46.621731  999740 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f0fbbdd411627f2285f6a2142c6579c541c1789005456eaa4d63990df89299a4/kubepods/burstable/podc175bc235a0c63c07e0afd1aee29bbee/3a2cec46884d8ef5fb284dc7572a41df41034eafcd4a8c55d789f3459745340f/freezer.state
	I1011 21:27:46.630684  999740 api_server.go:204] freezer state: "THAWED"
	I1011 21:27:46.630715  999740 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1011 21:27:46.638371  999740 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1011 21:27:46.638398  999740 status.go:463] multinode-256272 apiserver status = Running (err=<nil>)
	I1011 21:27:46.638409  999740 status.go:176] multinode-256272 status: &{Name:multinode-256272 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:27:46.638425  999740 status.go:174] checking status of multinode-256272-m02 ...
	I1011 21:27:46.638726  999740 cli_runner.go:164] Run: docker container inspect multinode-256272-m02 --format={{.State.Status}}
	I1011 21:27:46.655017  999740 status.go:371] multinode-256272-m02 host status = "Running" (err=<nil>)
	I1011 21:27:46.655048  999740 host.go:66] Checking if "multinode-256272-m02" exists ...
	I1011 21:27:46.655431  999740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-256272-m02
	I1011 21:27:46.671336  999740 host.go:66] Checking if "multinode-256272-m02" exists ...
	I1011 21:27:46.671654  999740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1011 21:27:46.671708  999740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-256272-m02
	I1011 21:27:46.690818  999740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34020 SSHKeyPath:/home/jenkins/minikube-integration/19749-870468/.minikube/machines/multinode-256272-m02/id_rsa Username:docker}
	I1011 21:27:46.779600  999740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1011 21:27:46.791454  999740 status.go:176] multinode-256272-m02 status: &{Name:multinode-256272-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:27:46.791491  999740 status.go:174] checking status of multinode-256272-m03 ...
	I1011 21:27:46.791819  999740 cli_runner.go:164] Run: docker container inspect multinode-256272-m03 --format={{.State.Status}}
	I1011 21:27:46.808052  999740 status.go:371] multinode-256272-m03 host status = "Stopped" (err=<nil>)
	I1011 21:27:46.808076  999740 status.go:384] host is not running, skipping remaining checks
	I1011 21:27:46.808089  999740 status.go:176] multinode-256272-m03 status: &{Name:multinode-256272-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
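
The status stderr above ends with an apiserver probe: after confirming the apiserver's freezer cgroup reports THAWED, minikube GETs https://192.168.58.2:8443/healthz and treats HTTP 200 ("ok") as healthy. A bare-bones version of just the healthz request (TLS verification is skipped here purely to keep the sketch short; a real client should trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver cert is not in this sketch's trust store, so
            // verification is skipped; pin the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }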

TestMultiNode/serial/StartAfterStop (10s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-256272 node start m03 -v=7 --alsologtostderr: (9.245326649s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.00s)

TestMultiNode/serial/RestartKeepsNodes (98.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-256272
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-256272
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-256272: (25.003834488s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-256272 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-256272 --wait=true -v=8 --alsologtostderr: (1m12.899119974s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-256272
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.03s)

TestMultiNode/serial/DeleteNode (5.53s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-256272 node delete m03: (4.825225566s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

TestMultiNode/serial/StopMultiNode (24.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-256272 stop: (23.928232347s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-256272 status: exit status 7 (97.484907ms)

-- stdout --
	multinode-256272
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-256272-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr: exit status 7 (103.625789ms)

-- stdout --
	multinode-256272
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-256272-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1011 21:30:04.458052 1008169 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:30:04.458250 1008169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:30:04.458299 1008169 out.go:358] Setting ErrFile to fd 2...
	I1011 21:30:04.458322 1008169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:30:04.458582 1008169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:30:04.458793 1008169 out.go:352] Setting JSON to false
	I1011 21:30:04.458879 1008169 mustload.go:65] Loading cluster: multinode-256272
	I1011 21:30:04.458950 1008169 notify.go:220] Checking for updates...
	I1011 21:30:04.459366 1008169 config.go:182] Loaded profile config "multinode-256272": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:30:04.459418 1008169 status.go:174] checking status of multinode-256272 ...
	I1011 21:30:04.460328 1008169 cli_runner.go:164] Run: docker container inspect multinode-256272 --format={{.State.Status}}
	I1011 21:30:04.478235 1008169 status.go:371] multinode-256272 host status = "Stopped" (err=<nil>)
	I1011 21:30:04.478257 1008169 status.go:384] host is not running, skipping remaining checks
	I1011 21:30:04.480692 1008169 status.go:176] multinode-256272 status: &{Name:multinode-256272 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1011 21:30:04.480755 1008169 status.go:174] checking status of multinode-256272-m02 ...
	I1011 21:30:04.481073 1008169 cli_runner.go:164] Run: docker container inspect multinode-256272-m02 --format={{.State.Status}}
	I1011 21:30:04.509097 1008169 status.go:371] multinode-256272-m02 host status = "Stopped" (err=<nil>)
	I1011 21:30:04.509117 1008169 status.go:384] host is not running, skipping remaining checks
	I1011 21:30:04.509125 1008169 status.go:176] multinode-256272-m02 status: &{Name:multinode-256272-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.13s)

TestMultiNode/serial/RestartMultiNode (52.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-256272 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1011 21:30:19.495681  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:30:36.899939  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-256272 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.903529596s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-256272 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.58s)

TestMultiNode/serial/ValidateNameConflict (33.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-256272
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-256272-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-256272-m02 --driver=docker  --container-runtime=containerd: exit status 14 (96.975711ms)

-- stdout --
	* [multinode-256272-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-256272-m02' is duplicated with machine name 'multinode-256272-m02' in profile 'multinode-256272'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-256272-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-256272-m03 --driver=docker  --container-runtime=containerd: (30.625568774s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-256272
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-256272: exit status 80 (326.003341ms)

-- stdout --
	* Adding node m03 to cluster multinode-256272 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-256272-m03 already exists in multinode-256272-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-256272-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-256272-m03: (2.024558488s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.13s)
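
The exit-status-14 case above is a pure name check: a new profile may not reuse an existing profile name or any machine name inside a multi-node profile (here multinode-256272-m02). A toy version of that rule, with the profile/machine layout modeled as a plain map (an illustration, not minikube's validation code):

    package main

    import "fmt"

    // validateProfileName rejects a new profile whose name matches an
    // existing profile or any machine inside one (e.g. multinode-256272-m02).
    func validateProfileName(name string, machinesByProfile map[string][]string) error {
        for profile, machines := range machinesByProfile {
            if name == profile {
                return fmt.Errorf("profile name %q already exists", name)
            }
            for _, m := range machines {
                if name == m {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        existing := map[string][]string{
            "multinode-256272": {"multinode-256272", "multinode-256272-m02"},
        }
        fmt.Println(validateProfileName("multinode-256272-m02", existing)) // rejected
        fmt.Println(validateProfileName("multinode-256272-m03", existing)) // ok: <nil>
    }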

TestPreload (108.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1011 21:31:42.560877  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-083726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.731640202s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-083726 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-083726 image pull gcr.io/k8s-minikube/busybox: (2.100697505s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-083726
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-083726: (12.108066237s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-083726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-083726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.710146166s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-083726 image list
helpers_test.go:175: Cleaning up "test-preload-083726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-083726
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-083726: (2.443789762s)
--- PASS: TestPreload (108.44s)
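
The pass condition here reduces to: an image pulled before `minikube stop` (busybox) must still appear in `minikube image list` after the second start. A minimal sketch of that membership check, with stand-in output instead of the test's real command runner:

    // Sketch of TestPreload's final assertion; the sample output below is a
    // stand-in for real `minikube image list` output.
    package main

    import (
        "fmt"
        "strings"
    )

    // hasImage reports whether any line of `image list` output names the image.
    func hasImage(imageList, want string) bool {
        for _, line := range strings.Split(imageList, "\n") {
            if strings.Contains(line, want) {
                return true
            }
        }
        return false
    }

    func main() {
        out := "registry.k8s.io/pause:3.7\ngcr.io/k8s-minikube/busybox:latest\n"
        fmt.Println(hasImage(out, "gcr.io/k8s-minikube/busybox")) // true: the image survived the restart
    }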

TestScheduledStopUnix (110.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-587499 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-587499 --memory=2048 --driver=docker  --container-runtime=containerd: (33.910643312s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-587499 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-587499 -n scheduled-stop-587499
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-587499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1011 21:33:57.045155  875861 retry.go:31] will retry after 114.929µs: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.045360  875861 retry.go:31] will retry after 159.509µs: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.046526  875861 retry.go:31] will retry after 191.706µs: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.047727  875861 retry.go:31] will retry after 224.295µs: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.049041  875861 retry.go:31] will retry after 591.337µs: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.050361  875861 retry.go:31] will retry after 1.122808ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.052578  875861 retry.go:31] will retry after 1.664111ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.054795  875861 retry.go:31] will retry after 1.808394ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.057036  875861 retry.go:31] will retry after 3.104656ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.060276  875861 retry.go:31] will retry after 3.285154ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.064520  875861 retry.go:31] will retry after 3.018477ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.067689  875861 retry.go:31] will retry after 11.652583ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.079931  875861 retry.go:31] will retry after 17.308409ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.098219  875861 retry.go:31] will retry after 9.819559ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.108435  875861 retry.go:31] will retry after 18.148881ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.127398  875861 retry.go:31] will retry after 34.190708ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
I1011 21:33:57.162630  875861 retry.go:31] will retry after 77.181621ms: open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/scheduled-stop-587499/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-587499 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-587499 -n scheduled-stop-587499
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-587499
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-587499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-587499
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-587499: exit status 7 (75.719603ms)
-- stdout --
	scheduled-stop-587499
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-587499 -n scheduled-stop-587499
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-587499 -n scheduled-stop-587499: exit status 7 (72.430158ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-587499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-587499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-587499: (5.192047855s)
--- PASS: TestScheduledStopUnix (110.71s)
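
The burst of retry.go lines above shows minikube polling for the scheduled-stop pid file with growing, jittered waits (from ~115µs up to ~77ms). A minimal sketch of that poll-with-backoff loop, with assumed initial wait, growth factor, and jitter rather than minikube's actual tuning:

    // Sketch of the retry pattern behind the retry.go log lines; the
    // backoff constants here are assumptions, not minikube's real values.
    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    // retryUntil calls f with growing, jittered waits until it succeeds or
    // the deadline elapses, returning the last error on timeout.
    func retryUntil(deadline time.Duration, f func() error) error {
        wait := 100 * time.Microsecond
        start := time.Now()
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return err
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait))) // jitter
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            wait *= 2 // roughly the growth the log's waits suggest
        }
    }

    func main() {
        err := retryUntil(5*time.Millisecond, func() error {
            _, err := os.Open("/nonexistent/pid") // stand-in for the profile's pid file
            return err
        })
        fmt.Println("gave up:", err)
    }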

TestInsufficientStorage (9.93s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-334376 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1011 21:35:19.494837  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-334376 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.488411157s)
-- stdout --
	{"specversion":"1.0","id":"3d49602b-784e-4817-88bc-5a44d38a946d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-334376] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4946b47c-a20c-46bb-8a70-e6c749a1f5be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19749"}}
	{"specversion":"1.0","id":"b197d277-9a41-4b1d-81a2-3dac3b7067dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a5783165-e0ec-47ea-9b60-758815eee33a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig"}}
	{"specversion":"1.0","id":"255f003a-0036-49f2-87f4-61d3e5c23d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube"}}
	{"specversion":"1.0","id":"2332937d-b1a9-40af-9a7e-3b9594e461fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ebf28093-feab-4bba-98b1-c40feabdaa46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5b6f0a16-a9e5-4a52-adf4-24db459ebaaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6c8e0714-9f41-45da-b900-967bce16032b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f009a392-d9f5-4266-8613-64471e744ced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9d258aa-f34f-41cb-bdae-fccd1acdca42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7baa3224-77a9-462d-b8ee-2692c9f5aa7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-334376\" primary control-plane node in \"insufficient-storage-334376\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b4ab5cd-71d1-440a-a332-5a564de88cbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f57e13ab-636e-4839-940f-16b60f2126b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9afaf03-cc3a-475b-a8bd-33122e61f0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-334376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-334376 --output=json --layout=cluster: exit status 7 (283.593356ms)
-- stdout --
	{"Name":"insufficient-storage-334376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-334376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1011 21:35:21.085167 1026813 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-334376" does not appear in /home/jenkins/minikube-integration/19749-870468/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-334376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-334376 --output=json --layout=cluster: exit status 7 (284.915343ms)
-- stdout --
	{"Name":"insufficient-storage-334376","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-334376","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1011 21:35:21.372361 1026875 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-334376" does not appear in /home/jenkins/minikube-integration/19749-870468/kubeconfig
	E1011 21:35:21.382411 1026875 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/insufficient-storage-334376/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-334376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-334376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-334376: (1.871547139s)
--- PASS: TestInsufficientStorage (9.93s)
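
Everything under the first `-- stdout --` above is minikube's `--output=json` mode: one CloudEvents-style JSON object per line, ending in an `io.k8s.sigs.minikube.error` event that carries the exit code (26) and the RSRC_DOCKER_STORAGE advice. A minimal sketch of consuming that stream, with the field set trimmed to what appears in this log:

    // Sketch of parsing minikube's line-delimited JSON event stream; only the
    // fields visible in the log above are modeled.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Stand-in for two lines of real `minikube start --output=json` output.
        stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}
    {"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
        sc := bufio.NewScanner(strings.NewReader(stream))
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }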

TestRunningBinaryUpgrade (85.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.78183953 start -p running-upgrade-010850 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1011 21:40:36.899844  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.78183953 start -p running-upgrade-010850 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.987677132s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-010850 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-010850 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.780680949s)
helpers_test.go:175: Cleaning up "running-upgrade-010850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-010850
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-010850: (2.695907472s)
--- PASS: TestRunningBinaryUpgrade (85.10s)

TestKubernetesUpgrade (349.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.537904693s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-266974
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-266974: (1.227204626s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-266974 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-266974 status --format={{.Host}}: exit status 7 (69.165777ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m37.321253607s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-266974 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (127.098322ms)
-- stdout --
	* [kubernetes-upgrade-266974] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-266974
	    minikube start -p kubernetes-upgrade-266974 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2669742 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-266974 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-266974 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.817595278s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-266974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-266974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-266974: (2.305719245s)
--- PASS: TestKubernetesUpgrade (349.58s)
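
Exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) comes from comparing the requested version against the version the cluster already runs: upgrades are allowed, downgrades are refused with the recreate/second-cluster suggestions shown above. A reduced sketch of that guard, using a hand-rolled comparison instead of minikube's real semver dependency:

    // Sketch of the downgrade guard; parse and isDowngrade are simplified
    // stand-ins for a proper semver library.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parse turns "v1.31.1" into {1, 31, 1}.
    func parse(v string) []int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        nums := make([]int, len(parts))
        for i, p := range parts {
            nums[i], _ = strconv.Atoi(p)
        }
        return nums
    }

    // isDowngrade reports whether requested is strictly older than current.
    func isDowngrade(current, requested string) bool {
        c, r := parse(current), parse(requested)
        for i := 0; i < len(c) && i < len(r); i++ {
            if r[i] != c[i] {
                return r[i] < c[i]
            }
        }
        return len(r) < len(c)
    }

    func main() {
        if isDowngrade("v1.31.1", "v1.20.0") {
            fmt.Println("refusing downgrade: delete the cluster or start a second profile")
        }
    }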

TestMissingContainerUpgrade (191.6s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2215653267 start -p missing-upgrade-509597 --memory=2200 --driver=docker  --container-runtime=containerd
E1011 21:35:36.900296  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2215653267 start -p missing-upgrade-509597 --memory=2200 --driver=docker  --container-runtime=containerd: (1m42.567653463s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-509597
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-509597: (10.337542389s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-509597
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-509597 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-509597 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m14.224736447s)
helpers_test.go:175: Cleaning up "missing-upgrade-509597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-509597
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-509597: (2.287010057s)
--- PASS: TestMissingContainerUpgrade (191.60s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (88.291577ms)
-- stdout --
	* [NoKubernetes-553043] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (40.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553043 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553043 --driver=docker  --container-runtime=containerd: (40.028666255s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-553043 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.45s)

TestNoKubernetes/serial/StartWithStopK8s (8.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.729839451s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-553043 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-553043 status -o json: exit status 2 (304.786996ms)
-- stdout --
	{"Name":"NoKubernetes-553043","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-553043
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-553043: (1.921821371s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.96s)

TestNoKubernetes/serial/Start (6.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553043 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.199418825s)
--- PASS: TestNoKubernetes/serial/Start (6.20s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-553043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-553043 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.894265ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
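
The exit status 1 here is a pass: the test tunnels `systemctl is-active` through `minikube ssh`, and `is-active` exits 0 only when the unit is active; the inner status 3 means "inactive", which is exactly what --no-kubernetes should produce. A minimal local sketch of the same probe, dropping the ssh hop:

    // Sketch of the kubelet liveness probe; this runs systemctl on the local
    // host instead of inside the minikube node via ssh.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active")
        case errors.As(err, &exitErr):
            fmt.Println("kubelet not active, systemctl exit status:", exitErr.ExitCode()) // 3 == inactive
        default:
            fmt.Println("could not run systemctl:", err) // e.g. no systemd on this host
        }
    }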

TestNoKubernetes/serial/ProfileList (1.73s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (1.356637243s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.73s)

TestNoKubernetes/serial/Stop (1.94s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-553043
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-553043: (1.93543476s)
--- PASS: TestNoKubernetes/serial/Stop (1.94s)

TestNoKubernetes/serial/StartNoArgs (7.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553043 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553043 --driver=docker  --container-runtime=containerd: (7.762421828s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-553043 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-553043 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.679245ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.85s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

TestStoppedBinaryUpgrade/Upgrade (111.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1732261156 start -p stopped-upgrade-778617 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1732261156 start -p stopped-upgrade-778617 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.762490224s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1732261156 -p stopped-upgrade-778617 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1732261156 -p stopped-upgrade-778617 stop: (20.023959408s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-778617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1011 21:40:19.495581  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-778617 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.024464187s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (111.81s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-778617
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-778617: (1.10918433s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

TestPause/serial/Start (70.53s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-473595 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-473595 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m10.528810455s)
--- PASS: TestPause/serial/Start (70.53s)

TestPause/serial/SecondStartNoReconfiguration (6.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-473595 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-473595 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.813998252s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.84s)

TestNetworkPlugins/group/false (5.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-236942 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-236942 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (262.371951ms)
-- stdout --
	* [false-236942] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19749
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1011 21:43:06.753397 1066429 out.go:345] Setting OutFile to fd 1 ...
	I1011 21:43:06.753565 1066429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:43:06.753579 1066429 out.go:358] Setting ErrFile to fd 2...
	I1011 21:43:06.753585 1066429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1011 21:43:06.753886 1066429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19749-870468/.minikube/bin
	I1011 21:43:06.754361 1066429 out.go:352] Setting JSON to false
	I1011 21:43:06.755464 1066429 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":19534,"bootTime":1728663453,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1011 21:43:06.755542 1066429 start.go:139] virtualization:  
	I1011 21:43:06.758052 1066429 out.go:177] * [false-236942] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1011 21:43:06.761745 1066429 notify.go:220] Checking for updates...
	I1011 21:43:06.763960 1066429 out.go:177]   - MINIKUBE_LOCATION=19749
	I1011 21:43:06.769764 1066429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1011 21:43:06.772089 1066429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19749-870468/kubeconfig
	I1011 21:43:06.774453 1066429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19749-870468/.minikube
	I1011 21:43:06.776594 1066429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1011 21:43:06.778771 1066429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1011 21:43:06.781281 1066429 config.go:182] Loaded profile config "pause-473595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1011 21:43:06.781367 1066429 driver.go:394] Setting default libvirt URI to qemu:///system
	I1011 21:43:06.820022 1066429 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1011 21:43:06.820180 1066429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1011 21:43:06.925660 1066429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-11 21:43:06.911969677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1011 21:43:06.925776 1066429 docker.go:318] overlay module found
	I1011 21:43:06.928132 1066429 out.go:177] * Using the docker driver based on user configuration
	I1011 21:43:06.930616 1066429 start.go:297] selected driver: docker
	I1011 21:43:06.930639 1066429 start.go:901] validating driver "docker" against <nil>
	I1011 21:43:06.930654 1066429 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1011 21:43:06.933689 1066429 out.go:201] 
	W1011 21:43:06.935518 1066429 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1011 21:43:06.937423 1066429 out.go:201] 
** /stderr **
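
The exit 14 above is raised during flag validation, before any node exists: the containerd runtime (like cri-o) ships no built-in pod network, so `--cni=false` is rejected outright. A reduced sketch of that predicate (minikube's real check lives in its start-flag validation, alongside many other rules):

    // Sketch of the "containerd requires CNI" rule; validateCNI is an
    // illustrative reduction, not minikube's actual function.
    package main

    import "fmt"

    func validateCNI(runtime, cni string) error {
        needsCNI := runtime == "containerd" || runtime == "crio"
        if needsCNI && cni == "false" {
            return fmt.Errorf("the %q container runtime requires CNI", runtime)
        }
        return nil
    }

    func main() {
        fmt.Println(validateCNI("containerd", "false")) // rejected, as for false-236942
        fmt.Println(validateCNI("containerd", "auto"))  // <nil>: minikube picks a CNI
    }
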
net_test.go:88: 
----------------------- debugLogs start: false-236942 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-236942

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-236942

>>> host: /etc/nsswitch.conf:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/hosts:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/resolv.conf:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-236942

>>> host: crictl pods:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: crictl containers:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> k8s: describe netcat deployment:
error: context "false-236942" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-236942" does not exist

>>> k8s: netcat logs:
error: context "false-236942" does not exist

>>> k8s: describe coredns deployment:
error: context "false-236942" does not exist

>>> k8s: describe coredns pods:
error: context "false-236942" does not exist

>>> k8s: coredns logs:
error: context "false-236942" does not exist

>>> k8s: describe api server pod(s):
error: context "false-236942" does not exist

>>> k8s: api server logs:
error: context "false-236942" does not exist

>>> host: /etc/cni:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: ip a s:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: ip r s:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: iptables-save:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: iptables table nat:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> k8s: describe kube-proxy daemon set:
error: context "false-236942" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-236942" does not exist

>>> k8s: kube-proxy logs:
error: context "false-236942" does not exist

>>> host: kubelet daemon status:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: kubelet daemon config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> k8s: kubelet logs:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-473595
contexts:
- context:
    cluster: pause-473595
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-473595
  name: pause-473595
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-473595
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.crt
    client-key: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-236942

>>> host: docker daemon status:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: docker daemon config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/docker/daemon.json:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: docker system info:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: cri-docker daemon status:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: cri-docker daemon config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: cri-dockerd version:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: containerd daemon status:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: containerd daemon config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/containerd/config.toml:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: containerd config dump:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: crio daemon status:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: crio daemon config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: /etc/crio:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

>>> host: crio config:
* Profile "false-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-236942"

----------------------- debugLogs end: false-236942 [took: 4.647258482s] --------------------------------
helpers_test.go:175: Cleaning up "false-236942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-236942
--- PASS: TestNetworkPlugins/group/false (5.14s)

TestPause/serial/Pause (0.94s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-473595 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.94s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-473595 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-473595 --output=json --layout=cluster: exit status 2 (412.937788ms)

-- stdout --
	{"Name":"pause-473595","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-473595","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-473595 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (1.22s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-473595 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-473595 --alsologtostderr -v=5: (1.22172511s)
--- PASS: TestPause/serial/PauseAgain (1.22s)

TestPause/serial/DeletePaused (3.38s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-473595 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-473595 --alsologtostderr -v=5: (3.379954812s)
--- PASS: TestPause/serial/DeletePaused (3.38s)

TestPause/serial/VerifyDeletedResources (0.18s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-473595
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-473595: exit status 1 (19.699538ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-473595: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-310298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1011 21:45:19.495638  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:45:36.900229  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-310298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m33.45268851s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.45s)

TestStartStop/group/no-preload/serial/FirstStart (75.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-359490 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-359490 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m15.543975759s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.54s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-310298 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cfc972b-88db-4574-bd9c-42ac02bfe5b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cfc972b-88db-4574-bd9c-42ac02bfe5b1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005269232s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-310298 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-310298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-310298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.205638338s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-310298 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/old-k8s-version/serial/Stop (12.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-310298 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-310298 --alsologtostderr -v=3: (12.348430226s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-310298 -n old-k8s-version-310298
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-310298 -n old-k8s-version-310298: exit status 7 (96.580504ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-310298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-359490 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bab18282-3ec0-43fd-bbd5-abe0b81d8218] Pending
helpers_test.go:344: "busybox" [bab18282-3ec0-43fd-bbd5-abe0b81d8218] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bab18282-3ec0-43fd-bbd5-abe0b81d8218] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004790998s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-359490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-359490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-359490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024071949s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-359490 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-359490 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-359490 --alsologtostderr -v=3: (12.093974811s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-359490 -n no-preload-359490
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-359490 -n no-preload-359490: exit status 7 (75.914258ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-359490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-359490 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1011 21:50:19.495700  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:50:36.900358  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-359490 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.225436432s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-359490 -n no-preload-359490
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.58s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h9r5j" [483ba0ea-db1b-41a1-ab1b-3644879356eb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004773253s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-h9r5j" [483ba0ea-db1b-41a1-ab1b-3644879356eb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004951096s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-359490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-359490 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-359490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-359490 -n no-preload-359490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-359490 -n no-preload-359490: exit status 2 (363.675068ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-359490 -n no-preload-359490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-359490 -n no-preload-359490: exit status 2 (328.492034ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-359490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-359490 -n no-preload-359490
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-359490 -n no-preload-359490
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

TestStartStop/group/embed-certs/serial/FirstStart (63.35s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-159135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-159135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m3.34860985s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.35s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-95mv9" [8b76fe3c-2da8-4ce0-9d31-016f78d2113c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004251497s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-95mv9" [8b76fe3c-2da8-4ce0-9d31-016f78d2113c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016862213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-310298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-310298 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-310298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-310298 -n old-k8s-version-310298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-310298 -n old-k8s-version-310298: exit status 2 (320.753578ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-310298 -n old-k8s-version-310298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-310298 -n old-k8s-version-310298: exit status 2 (308.353104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-310298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-310298 -n old-k8s-version-310298
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-310298 -n old-k8s-version-310298
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-782993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-782993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (48.321428503s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.32s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-159135 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b541f383-486d-4cb7-806f-e3394fc62de6] Pending
helpers_test.go:344: "busybox" [b541f383-486d-4cb7-806f-e3394fc62de6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b541f383-486d-4cb7-806f-e3394fc62de6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004000834s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-159135 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-159135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-159135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.171444301s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-159135 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/embed-certs/serial/Stop (12.28s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-159135 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-159135 --alsologtostderr -v=3: (12.27984879s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.28s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-159135 -n embed-certs-159135
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-159135 -n embed-certs-159135: exit status 7 (89.341731ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-159135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (266.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-159135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-159135 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.181810414s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-159135 -n embed-certs-159135
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-782993 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe9f57f3-182e-40f3-86af-875457eadab8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe9f57f3-182e-40f3-86af-875457eadab8] Running
E1011 21:55:19.494923  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004518138s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-782993 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-782993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-782993 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.258921904s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-782993 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-782993 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-782993 --alsologtostderr -v=3: (12.729654442s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.73s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993: exit status 7 (76.800368ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-782993 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-782993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1011 21:55:36.899613  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.676339  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.682668  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.694029  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.715556  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.756862  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:12.838311  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:13.000334  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:13.322786  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:13.964170  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:15.245678  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:17.808495  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:22.930701  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:33.172509  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:57:53.654735  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.325336  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.331783  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.343225  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.364679  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.406054  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.487571  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.649098  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:26.970809  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:27.612890  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:28.894406  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:31.456112  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:34.616208  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:36.577828  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:58:46.819547  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:59:07.301329  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-782993 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.677718879s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wdt25" [31b378ec-b812-4ca1-990a-0d511686ff88] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003416017s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wdt25" [31b378ec-b812-4ca1-990a-0d511686ff88] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008115143s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-159135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-159135 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
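The image audit shells out to `image list --format=json` and flags anything outside the bundled Kubernetes image set. The report doesn't show the JSON schema, so the sketch below decodes it generically, assuming only that the output is a JSON array (which the per-image findings above suggest):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "embed-certs-159135", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	// Decode without assuming field names; each entry describes one image.
	var entries []map[string]any
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Println(e) // e.g. an entry tagged kindest/kindnetd:v20240813-c6f155d6
	}
}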

TestStartStop/group/embed-certs/serial/Pause (3.18s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-159135 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-159135 -n embed-certs-159135
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-159135 -n embed-certs-159135: exit status 2 (319.904147ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-159135 -n embed-certs-159135
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-159135 -n embed-certs-159135: exit status 2 (339.466264ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-159135 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-159135 -n embed-certs-159135
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-159135 -n embed-certs-159135
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)
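The Pause sequence relies on exit codes rather than output alone: after pause, status exits 2 while still printing the component state (Paused for the apiserver, Stopped for the kubelet), which the test records as "may be ok" before unpausing. A minimal Go sketch of that tolerant status read, assuming the binary path and profile name shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// status runs `minikube status` and tolerates a non-zero exit, returning
// the printed state plus the exit code, mirroring "(may be ok)" above.
func status(format, profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format", format, "-p", profile, "-n", profile).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil // stdout is still captured
	}
	return string(out), 0, err
}

func main() {
	state, code, err := status("{{.APIServer}}", "embed-certs-159135")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver=%q exit=%d\n", state, code) // expect "Paused", exit 2
}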

TestStartStop/group/newest-cni/serial/FirstStart (38.27s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-814660 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1011 21:59:48.263475  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
E1011 21:59:56.537982  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-814660 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (38.273695284s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q4x27" [838e0486-8cff-4143-9c2d-0f9444358cf4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003956456s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q4x27" [838e0486-8cff-4143-9c2d-0f9444358cf4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003991642s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-782993 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-782993 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.74s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-782993 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993: exit status 2 (385.433577ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993: exit status 2 (365.826659ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-782993 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-782993 -n default-k8s-diff-port-782993
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.74s)

TestNetworkPlugins/group/auto/Start (60.89s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.888997501s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.89s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-814660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-814660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.637562438s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/newest-cni/serial/Stop (3.15s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-814660 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-814660 --alsologtostderr -v=3: (3.151201665s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814660 -n newest-cni-814660
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814660 -n newest-cni-814660: exit status 7 (117.959345ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-814660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/newest-cni/serial/SecondStart (25.23s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-814660 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1011 22:00:36.900227  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/addons-652898/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-814660 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (24.707767078s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814660 -n newest-cni-814660
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-814660 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (4.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-814660 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-814660 --alsologtostderr -v=1: (1.261508178s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814660 -n newest-cni-814660
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814660 -n newest-cni-814660: exit status 2 (421.455888ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814660 -n newest-cni-814660
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814660 -n newest-cni-814660: exit status 2 (405.416951ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-814660 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814660 -n newest-cni-814660
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814660 -n newest-cni-814660
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.10s)
E1011 22:06:23.127065  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.133593  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.145152  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.166943  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.208928  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.290511  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.451964  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:23.773718  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:24.415112  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:25.696576  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:28.258117  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:32.017675  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:33.379836  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:06:43.622119  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (62.38s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1011 22:01:10.185366  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m2.377275801s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.38s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-236942 "pgrep -a kubelet"
I1011 22:01:22.731511  875861 config.go:182] Loaded profile config "auto-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (9.43s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-236942 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bwq7l" [4c9e6a7c-bb6d-4a25-964e-83cb04ccb27d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bwq7l" [4c9e6a7c-bb6d-4a25-964e-83cb04ccb27d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004540534s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.43s)

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
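The DNS, Localhost, and HairPin probes above share one shape: exec a command inside the netcat deployment and check the exit status. DNS resolves kubernetes.default through cluster DNS, Localhost dials the pod's own loopback, and HairPin dials the pod's own Service name, which only succeeds when the CNI routes a pod's traffic back to itself. A condensed Go sketch of the trio, reusing the auto-236942 context from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range probes {
		// Run each probe inside the netcat deployment, pass/fail by exit status.
		cmd := append([]string{"--context", "auto-236942", "exec",
			"deployment/netcat", "--"}, args...)
		if err := exec.Command("kubectl", cmd...).Run(); err != nil {
			fmt.Printf("%s: FAIL (%v)\n", name, err)
			continue
		}
		fmt.Printf("%s: ok\n", name)
	}
}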

TestNetworkPlugins/group/calico/Start (67.49s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.485092547s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.49s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6f9rc" [9422b272-79f2-45dc-95d3-789753dae0a6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006322204s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-236942 "pgrep -a kubelet"
I1011 22:02:10.459480  875861 config.go:182] Loaded profile config "kindnet-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-236942 replace --force -f testdata/netcat-deployment.yaml
I1011 22:02:10.811560  875861 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nnc5g" [9fa05102-2493-4917-93f0-1f29de474a20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:02:12.677133  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/old-k8s-version-310298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nnc5g" [9fa05102-2493-4917-93f0-1f29de474a20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003907196s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (57.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.182738602s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gxv75" [8ee78156-d5de-4333-9bc6-367b5c230443] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.011105842s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-236942 "pgrep -a kubelet"
I1011 22:03:07.589958  875861 config.go:182] Loaded profile config "calico-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (9.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-236942 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-twgdj" [dd7e3f29-3e30-46ea-add8-7906ec105a6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-twgdj" [dd7e3f29-3e30-46ea-add8-7906ec105a6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.00391727s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.34s)

TestNetworkPlugins/group/calico/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (88.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m28.449467829s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.45s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-236942 "pgrep -a kubelet"
I1011 22:03:44.326896  875861 config.go:182] Loaded profile config "custom-flannel-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-236942 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xmsn7" [a703553e-0355-48c2-b007-f66ed703c6bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xmsn7" [a703553e-0355-48c2-b007-f66ed703c6bd] Running
E1011 22:03:54.027589  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/no-preload-359490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00546425s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (51.58s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1011 22:05:02.564709  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.075665  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.082102  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.093570  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.115004  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.156411  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:10.238504  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.580740083s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.58s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-236942 "pgrep -a kubelet"
E1011 22:05:10.403294  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
I1011 22:05:10.562641  875861 config.go:182] Loaded profile config "enable-default-cni-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-236942 replace --force -f testdata/netcat-deployment.yaml
E1011 22:05:10.725390  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xztvr" [180defdf-aa95-4e2c-84ee-cb3eac788ad0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:05:11.367615  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:12.649777  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-xztvr" [180defdf-aa95-4e2c-84ee-cb3eac788ad0] Running
E1011 22:05:15.211100  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:05:19.495260  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/functional-807114/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00444071s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.32s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-z68j5" [2f6c1305-ca69-446c-ac0c-0465677cd28d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004434555s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-236942 "pgrep -a kubelet"
I1011 22:05:19.848793  875861 config.go:182] Loaded profile config "flannel-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.34s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-236942 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m7wcj" [a4df2f60-633b-482e-a042-a1b66ca3e2d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1011 22:05:20.333010  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-m7wcj" [a4df2f60-633b-482e-a042-a1b66ca3e2d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004418843s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-236942 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (71.44s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1011 22:05:51.056093  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/default-k8s-diff-port-782993/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-236942 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m11.442402322s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.44s)
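Every TestNetworkPlugins group in this section boots the same 3072 MB docker/containerd cluster and varies only the CNI selection flag; the profile name is the group name plus the -236942 suffix. A sketch that prints the start invocations exactly as they appear above:

package main

import "fmt"

func main() {
	// CNI flag per group; "auto" lets minikube pick the default.
	cnis := map[string]string{
		"auto":               "",
		"kindnet":            "--cni=kindnet",
		"calico":             "--cni=calico",
		"custom-flannel":     "--cni=testdata/kube-flannel.yaml",
		"enable-default-cni": "--enable-default-cni=true",
		"flannel":            "--cni=flannel",
		"bridge":             "--cni=bridge",
	}
	for name, flag := range cnis {
		fmt.Printf("out/minikube-linux-arm64 start -p %s-236942 --memory=3072 "+
			"--alsologtostderr --wait=true --wait-timeout=15m --driver=docker "+
			"--container-runtime=containerd %s\n", name, flag)
	}
}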

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-236942 "pgrep -a kubelet"
I1011 22:06:54.292208  875861 config.go:182] Loaded profile config "bridge-236942": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (12.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-236942 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kv7nn" [a77554e4-a468-47f5-9365-06f62b0219f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kv7nn" [a77554e4-a468-47f5-9365-06f62b0219f2] Running
E1011 22:07:04.104312  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/auto-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.110887  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.117389  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.128912  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.150420  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.191910  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.273393  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.434908  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:04.756966  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
E1011 22:07:05.399186  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003371426s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)
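
The contents of testdata/netcat-deployment.yaml are not shown in this log. A minimal sketch of an equivalent deployment, plus the "netcat" service the later subtests dial, might look as follows (image, args, and service spec are assumptions; the dnsutils container name comes from the pod status above):

kubectl --context bridge-236942 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        image: registry.k8s.io/e2e-test-images/agnhost:2.40  # assumed image
        args: ["netexec", "--http-port=8080"]                # assumed listener on port 8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF
kubectl --context bridge-236942 wait --for=condition=Ready pod -l app=netcat --timeout=15m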

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-236942 exec deployment/netcat -- nslookup kubernetes.default
E1011 22:07:06.681509  875861 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/kindnet-236942/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)
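
The DNS subtest passes if in-cluster name resolution works from the pod; it can be reproduced by hand with the same probe (on the default service CIDR, kubernetes.default resolves to 10.96.0.1; that expected value is an assumption of this note, not output captured in the log):

kubectl --context bridge-236942 exec deployment/netcat -- nslookup kubernetes.default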

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
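
The Localhost and HairPin subtests differ only in the dial target: localhost exercises the pod's own loopback on port 8080, while dialing the "netcat" service name sends traffic out to the service VIP and back into the same pod (hairpin NAT). A manual re-run of the hairpin probe, with the service name and port taken from the test command:

kubectl --context bridge-236942 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"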

Test skip (28/329)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test applies only to darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-547143 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-547143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-547143
--- SKIP: TestDownloadOnlyKic (0.55s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skipping the AMD GPU test; it runs only with the docker driver on the amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with the docker container runtime; currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: mysql does not support arm64; skipping the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: docker-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: podman-env is only validated with the docker container runtime; currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin; skipping the DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires the none driver and a non-empty SUDO_USER env
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env; currently testing the containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-225207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-225207
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.93s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-236942 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-236942

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-236942

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/hosts:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/resolv.conf:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-236942

>>> host: crictl pods:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: crictl containers:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> k8s: describe netcat deployment:
error: context "kubenet-236942" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-236942" does not exist

>>> k8s: netcat logs:
error: context "kubenet-236942" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-236942" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-236942" does not exist

>>> k8s: coredns logs:
error: context "kubenet-236942" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-236942" does not exist

>>> k8s: api server logs:
error: context "kubenet-236942" does not exist

>>> host: /etc/cni:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: ip a s:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: ip r s:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: iptables-save:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: iptables table nat:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-236942" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-236942" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-236942" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: kubelet daemon config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> k8s: kubelet logs:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-473595
contexts:
- context:
    cluster: pause-473595
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:42:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-473595
  name: pause-473595
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-473595
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.crt
    client-key: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-236942

>>> host: docker daemon status:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: docker daemon config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: docker system info:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: cri-docker daemon status:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: cri-docker daemon config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: cri-dockerd version:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: containerd daemon status:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: containerd daemon config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: containerd config dump:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: crio daemon status:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: crio daemon config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: /etc/crio:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

>>> host: crio config:
* Profile "kubenet-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-236942"

----------------------- debugLogs end: kubenet-236942 [took: 3.729512076s] --------------------------------
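
Note on the captured kubectl config above: it contains only the pause-473595 cluster and no kubenet-236942 context, which is why every kubectl probe fails with "context was not found"; the kubenet cluster was never started because the test is skipped. The contexts actually available can be listed with:

kubectl config get-contexts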
helpers_test.go:175: Cleaning up "kubenet-236942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-236942
--- SKIP: TestNetworkPlugins/group/kubenet (3.93s)
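
kubenet is a legacy kubelet networking mode rather than a CNI plugin, and minikube's containerd integration requires a real CNI, hence the skip above. To exercise an explicit CNI with containerd, one would pass minikube's --cni flag, e.g. (profile name assumed):

out/minikube-linux-arm64 start -p kubenet-demo --driver=docker --container-runtime=containerd --cni=bridge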

TestNetworkPlugins/group/cilium (5.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-236942 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-236942

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-236942

>>> host: /etc/nsswitch.conf:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/hosts:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/resolv.conf:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-236942

>>> host: crictl pods:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: crictl containers:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> k8s: describe netcat deployment:
error: context "cilium-236942" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-236942" does not exist

>>> k8s: netcat logs:
error: context "cilium-236942" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-236942" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-236942" does not exist

>>> k8s: coredns logs:
error: context "cilium-236942" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-236942" does not exist

>>> k8s: api server logs:
error: context "cilium-236942" does not exist

>>> host: /etc/cni:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: ip a s:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: ip r s:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: iptables-save:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: iptables table nat:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-236942

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-236942

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-236942" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-236942" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-236942

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-236942

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-236942" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-236942" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-236942" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-236942" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-236942" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: kubelet daemon config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> k8s: kubelet logs:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19749-870468/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:43:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-473595
contexts:
- context:
    cluster: pause-473595
    extensions:
    - extension:
        last-update: Fri, 11 Oct 2024 21:43:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-473595
  name: pause-473595
current-context: pause-473595
kind: Config
preferences: {}
users:
- name: pause-473595
  user:
    client-certificate: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.crt
    client-key: /home/jenkins/minikube-integration/19749-870468/.minikube/profiles/pause-473595/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-236942

>>> host: docker daemon status:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: docker daemon config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: docker system info:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: cri-docker daemon status:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: cri-docker daemon config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: cri-dockerd version:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: containerd daemon status:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: containerd daemon config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: containerd config dump:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: crio daemon status:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: crio daemon config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: /etc/crio:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

>>> host: crio config:
* Profile "cilium-236942" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-236942"

----------------------- debugLogs end: cilium-236942 [took: 5.016854841s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-236942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-236942
--- SKIP: TestNetworkPlugins/group/cilium (5.24s)