Test Report: Docker_Linux_containerd_arm64 19780

                    
d63f64bffc284d34b6c2581e44dece8bfcca0b7a:2024-10-09:36574

Test failures (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                211.33
302    TestStartStop/group/old-k8s-version/serial/SecondStart   382.83
TestAddons/serial/Volcano (211.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:811: volcano-admission stabilized in 58.110913ms
addons_test.go:803: volcano-scheduler stabilized in 58.370009ms
addons_test.go:819: volcano-controller stabilized in 58.440211ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-9j66b" [1e2ef4ea-af32-433c-bcc7-273d3cded361] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004266574s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-82wcx" [079c66af-66fe-47d2-bbce-450353e22055] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004159992s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-7gf79" [0d059aa9-4c73-46ed-b0d4-d6be5dfbf2c8] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004108294s
addons_test.go:838: (dbg) Run:  kubectl --context addons-514774 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-514774 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-514774 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [be174162-1b80-4a84-b9cb-ad91e5af086e] Pending
helpers_test.go:344: "test-job-nginx-0" [be174162-1b80-4a84-b9cb-ad91e5af086e] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-514774 -n addons-514774
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-09 18:53:26.884919389 +0000 UTC m=+427.081303443
addons_test.go:870: (dbg) Run:  kubectl --context addons-514774 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-514774 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-1e15b863-4321-4f39-a7f1-db1b844031fe
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jqhmm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-jqhmm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-514774 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-514774 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
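
For context, the failure above is a capacity problem on the single minikube node rather than a Volcano malfunction: the node container is created with only 2 CPUs (NanoCpus=2000000000 in the docker inspect output below), and the kube-system pods plus the enabled addons already request enough of that for less than one full CPU to remain allocatable, so the test pod's cpu: 1 request cannot be placed. A minimal diagnostic sketch for a similar run (standard kubectl invocations; the custom-columns query is an illustrative approximation and is not part of the test itself):

  # Compare the node's allocatable CPU with what is already requested ("Allocated resources" section).
  kubectl --context addons-514774 describe node addons-514774 | grep -A 8 "Allocated resources"

  # List the CPU requests declared by every pod, to see what is consuming the 2 CPUs.
  kubectl --context addons-514774 get pods -A \
    -o custom-columns=NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu
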
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-514774
helpers_test.go:235: (dbg) docker inspect addons-514774:
-- stdout --
	[
	    {
	        "Id": "2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1",
	        "Created": "2024-10-09T18:46:59.363030268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8860,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T18:46:59.536933114Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1/hosts",
	        "LogPath": "/var/lib/docker/containers/2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1/2bde813c5e71f516e229cd49724ae2d74de1cc71f563ecc34dede23798738ed1-json.log",
	        "Name": "/addons-514774",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-514774:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-514774",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9aa1d8fb1f52e5ae60cdfee028a97e7857c7c0da16772c797a6b15722209b3d8-init/diff:/var/lib/docker/overlay2/b874d444a15868350f8fd5f52e8f0ed756efd8ce6e723f3b60197aecd7f71b6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9aa1d8fb1f52e5ae60cdfee028a97e7857c7c0da16772c797a6b15722209b3d8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9aa1d8fb1f52e5ae60cdfee028a97e7857c7c0da16772c797a6b15722209b3d8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9aa1d8fb1f52e5ae60cdfee028a97e7857c7c0da16772c797a6b15722209b3d8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-514774",
	                "Source": "/var/lib/docker/volumes/addons-514774/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-514774",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-514774",
	                "name.minikube.sigs.k8s.io": "addons-514774",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0662cb49d5d62c1e256df83e372e3a5b06ed7a1a13d1789b25442a7f121fc090",
	            "SandboxKey": "/var/run/docker/netns/0662cb49d5d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-514774": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1d14568aa75d84e88d53dad7eb3196862d8345e55d59db4660ea82e2f77d5f53",
	                    "EndpointID": "9fa21ef61d9a4d9206d41535923c3f0149431f631754ebc094f68ea1ed0c03be",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-514774",
	                        "2bde813c5e71"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
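
The inspect output above shows the resource ceiling behind the Volcano scheduling failure: HostConfig.NanoCpus is 2000000000 (2 CPUs) and HostConfig.Memory is 4194304000 bytes (the --memory=4000 flag from the start command), so any pod requesting a full cpu: 1 has to fit next to everything kube-system and the addons already reserve on those same 2 CPUs. A quick way to read just those two fields on a comparable environment (plain docker inspect Go-template syntax over fields shown verbatim above; a diagnostic aid, not something the test runs):

  # Print only the CPU and memory limits minikube applied to the node container.
  docker inspect addons-514774 --format 'NanoCpus={{.HostConfig.NanoCpus}} MemoryBytes={{.HostConfig.Memory}}'
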
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-514774 -n addons-514774
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 logs -n 25: (1.589027092s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-217697   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-217697              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-217697              | download-only-217697   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | -o=json --download-only              | download-only-003905   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-003905              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-003905              | download-only-003905   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-217697              | download-only-217697   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-003905              | download-only-003905   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                   | download-docker-334728 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | download-docker-334728               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-334728            | download-docker-334728 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                   | binary-mirror-587388   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | binary-mirror-587388                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34277               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-587388              | binary-mirror-587388   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| addons  | enable dashboard -p                  | addons-514774          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-514774                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-514774          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-514774                        |                        |         |         |                     |                     |
	| start   | -p addons-514774 --wait=true         | addons-514774          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:50 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:35.017750    8358 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:35.017894    8358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:35.017920    8358 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:35.017939    8358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:35.018217    8358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 18:46:35.018710    8358 out.go:352] Setting JSON to false
	I1009 18:46:35.019519    8358 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1738,"bootTime":1728497857,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 18:46:35.019598    8358 start.go:139] virtualization:  
	I1009 18:46:35.021691    8358 out.go:177] * [addons-514774] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:46:35.023240    8358 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:46:35.023298    8358 notify.go:220] Checking for updates...
	I1009 18:46:35.025628    8358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:35.027172    8358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:46:35.028305    8358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 18:46:35.029832    8358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:46:35.031252    8358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:46:35.033005    8358 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:35.055648    8358 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:35.055779    8358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:35.122054    8358 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:35.112215399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:35.122163    8358 docker.go:318] overlay module found
	I1009 18:46:35.123600    8358 out.go:177] * Using the docker driver based on user configuration
	I1009 18:46:35.125132    8358 start.go:297] selected driver: docker
	I1009 18:46:35.125151    8358 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:35.125164    8358 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:46:35.125801    8358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:35.181103    8358 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-09 18:46:35.171197202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:35.181310    8358 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:35.181536    8358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:46:35.182784    8358 out.go:177] * Using Docker driver with root privileges
	I1009 18:46:35.183897    8358 cni.go:84] Creating CNI manager for ""
	I1009 18:46:35.183956    8358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:46:35.183968    8358 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:35.184039    8358 start.go:340] cluster config:
	{Name:addons-514774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:35.185611    8358 out.go:177] * Starting "addons-514774" primary control-plane node in "addons-514774" cluster
	I1009 18:46:35.186873    8358 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1009 18:46:35.188137    8358 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:35.189649    8358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 18:46:35.189705    8358 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1009 18:46:35.189732    8358 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:35.189817    8358 preload.go:172] Found /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 18:46:35.189834    8358 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1009 18:46:35.190179    8358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/config.json ...
	I1009 18:46:35.190206    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/config.json: {Name:mk59eff79d957673e0864b4532cfe18feb8e5fc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:35.190366    8358 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:35.205219    8358 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:35.205344    8358 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:35.205367    8358 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:35.205388    8358 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:35.205398    8358 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:35.205404    8358 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1009 18:46:52.290971    8358 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1009 18:46:52.291012    8358 cache.go:194] Successfully downloaded all kic artifacts
	I1009 18:46:52.291042    8358 start.go:360] acquireMachinesLock for addons-514774: {Name:mkdcf1005b9430f33be78576ed4ac9ee10d48cce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:52.291167    8358 start.go:364] duration metric: took 103.793µs to acquireMachinesLock for "addons-514774"
	I1009 18:46:52.291196    8358 start.go:93] Provisioning new machine with config: &{Name:addons-514774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514774 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 18:46:52.291291    8358 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:46:52.293309    8358 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 18:46:52.293603    8358 start.go:159] libmachine.API.Create for "addons-514774" (driver="docker")
	I1009 18:46:52.293632    8358 client.go:168] LocalClient.Create starting
	I1009 18:46:52.293755    8358 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem
	I1009 18:46:52.552551    8358 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem
	I1009 18:46:52.833837    8358 cli_runner.go:164] Run: docker network inspect addons-514774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:46:52.850035    8358 cli_runner.go:211] docker network inspect addons-514774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:46:52.850122    8358 network_create.go:284] running [docker network inspect addons-514774] to gather additional debugging logs...
	I1009 18:46:52.850146    8358 cli_runner.go:164] Run: docker network inspect addons-514774
	W1009 18:46:52.864975    8358 cli_runner.go:211] docker network inspect addons-514774 returned with exit code 1
	I1009 18:46:52.865005    8358 network_create.go:287] error running [docker network inspect addons-514774]: docker network inspect addons-514774: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-514774 not found
	I1009 18:46:52.865018    8358 network_create.go:289] output of [docker network inspect addons-514774]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-514774 not found
	
	** /stderr **
	I1009 18:46:52.865127    8358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:52.879963    8358 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194de10}
	I1009 18:46:52.880001    8358 network_create.go:124] attempt to create docker network addons-514774 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:46:52.880054    8358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-514774 addons-514774
	I1009 18:46:52.944194    8358 network_create.go:108] docker network addons-514774 192.168.49.0/24 created
	I1009 18:46:52.944227    8358 kic.go:121] calculated static IP "192.168.49.2" for the "addons-514774" container
	I1009 18:46:52.944314    8358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:46:52.959265    8358 cli_runner.go:164] Run: docker volume create addons-514774 --label name.minikube.sigs.k8s.io=addons-514774 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:46:52.975355    8358 oci.go:103] Successfully created a docker volume addons-514774
	I1009 18:46:52.975453    8358 cli_runner.go:164] Run: docker run --rm --name addons-514774-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-514774 --entrypoint /usr/bin/test -v addons-514774:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 18:46:55.306125    8358 cli_runner.go:217] Completed: docker run --rm --name addons-514774-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-514774 --entrypoint /usr/bin/test -v addons-514774:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (2.330634635s)
	I1009 18:46:55.306157    8358 oci.go:107] Successfully prepared a docker volume addons-514774
	I1009 18:46:55.306196    8358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 18:46:55.306216    8358 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:46:55.306292    8358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-514774:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:46:59.298882    8358 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-514774:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (3.992536028s)
	I1009 18:46:59.298911    8358 kic.go:203] duration metric: took 3.992692447s to extract preloaded images to volume ...
	W1009 18:46:59.299053    8358 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:46:59.299169    8358 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:46:59.349184    8358 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-514774 --name addons-514774 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-514774 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-514774 --network addons-514774 --ip 192.168.49.2 --volume addons-514774:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 18:46:59.717614    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Running}}
	I1009 18:46:59.739331    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:46:59.760915    8358 cli_runner.go:164] Run: docker exec addons-514774 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:46:59.833567    8358 oci.go:144] the created container "addons-514774" has a running status.
	I1009 18:46:59.833597    8358 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa...
	I1009 18:47:00.415899    8358 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:47:00.439999    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:00.472952    8358 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:47:00.472978    8358 kic_runner.go:114] Args: [docker exec --privileged addons-514774 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:47:00.544048    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:00.562036    8358 machine.go:93] provisionDockerMachine start ...
	I1009 18:47:00.562127    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:00.593701    8358 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:00.593956    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:47:00.593965    8358 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:47:00.748538    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514774
	
	I1009 18:47:00.748622    8358 ubuntu.go:169] provisioning hostname "addons-514774"
	I1009 18:47:00.748733    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:00.773512    8358 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:00.773786    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:47:00.773799    8358 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514774 && echo "addons-514774" | sudo tee /etc/hostname
	I1009 18:47:00.920233    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514774
	
	I1009 18:47:00.920323    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:00.938107    8358 main.go:141] libmachine: Using SSH client type: native
	I1009 18:47:00.938338    8358 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:47:00.938358    8358 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514774' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514774/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514774' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:47:01.068944    8358 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:47:01.068986    8358 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-2290/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-2290/.minikube}
	I1009 18:47:01.069008    8358 ubuntu.go:177] setting up certificates
	I1009 18:47:01.069020    8358 provision.go:84] configureAuth start
	I1009 18:47:01.069082    8358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-514774
	I1009 18:47:01.085886    8358 provision.go:143] copyHostCerts
	I1009 18:47:01.085987    8358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem (1078 bytes)
	I1009 18:47:01.086113    8358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem (1123 bytes)
	I1009 18:47:01.086175    8358 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem (1679 bytes)
	I1009 18:47:01.086227    8358 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem org=jenkins.addons-514774 san=[127.0.0.1 192.168.49.2 addons-514774 localhost minikube]
	I1009 18:47:01.370808    8358 provision.go:177] copyRemoteCerts
	I1009 18:47:01.370878    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:47:01.370919    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:01.388208    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:01.481697    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:47:01.505309    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:47:01.528489    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:47:01.551828    8358 provision.go:87] duration metric: took 482.794443ms to configureAuth
	I1009 18:47:01.551853    8358 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:47:01.552042    8358 config.go:182] Loaded profile config "addons-514774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 18:47:01.552055    8358 machine.go:96] duration metric: took 990.001807ms to provisionDockerMachine
	I1009 18:47:01.552063    8358 client.go:171] duration metric: took 9.258423735s to LocalClient.Create
	I1009 18:47:01.552082    8358 start.go:167] duration metric: took 9.258480373s to libmachine.API.Create "addons-514774"
	I1009 18:47:01.552098    8358 start.go:293] postStartSetup for "addons-514774" (driver="docker")
	I1009 18:47:01.552108    8358 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:47:01.552160    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:47:01.552204    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:01.569059    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:01.661603    8358 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:47:01.664534    8358 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:47:01.664568    8358 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:47:01.664583    8358 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:47:01.664593    8358 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 18:47:01.664604    8358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/addons for local assets ...
	I1009 18:47:01.664678    8358 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/files for local assets ...
	I1009 18:47:01.664706    8358 start.go:296] duration metric: took 112.601942ms for postStartSetup
	I1009 18:47:01.665008    8358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-514774
	I1009 18:47:01.681129    8358 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/config.json ...
	I1009 18:47:01.681423    8358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:47:01.681472    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:01.697505    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:01.785046    8358 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:47:01.789112    8358 start.go:128] duration metric: took 9.497806035s to createHost
	I1009 18:47:01.789135    8358 start.go:83] releasing machines lock for "addons-514774", held for 9.497956358s
	I1009 18:47:01.789202    8358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-514774
	I1009 18:47:01.805246    8358 ssh_runner.go:195] Run: cat /version.json
	I1009 18:47:01.805296    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:01.805317    8358 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:47:01.805391    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:01.833919    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:01.836849    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:02.103345    8358 ssh_runner.go:195] Run: systemctl --version
	I1009 18:47:02.107544    8358 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:47:02.111763    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 18:47:02.135245    8358 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:47:02.135326    8358 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:47:02.163251    8358 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
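	A minimal shell sketch (not part of the captured log) of what the two find/sed steps above amount to, assuming the same /etc/cni/net.d contents seen in this run:
	  # Give the loopback config a "name" if missing and pin it to CNI spec 1.0.0.
	  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /etc/cni/net.d/*loopback.conf*
	  # Park any bridge/podman configs so that kindnet, applied later, is the only active CNI.
	  sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	  sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled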
	I1009 18:47:02.163272    8358 start.go:495] detecting cgroup driver to use...
	I1009 18:47:02.163306    8358 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:47:02.163358    8358 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 18:47:02.176078    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 18:47:02.188064    8358 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:47:02.188126    8358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:47:02.202025    8358 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:47:02.216992    8358 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:47:02.306176    8358 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:47:02.402783    8358 docker.go:233] disabling docker service ...
	I1009 18:47:02.402848    8358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:47:02.421685    8358 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:47:02.433473    8358 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:47:02.525040    8358 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:47:02.613272    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:47:02.624328    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:47:02.640829    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1009 18:47:02.650959    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 18:47:02.661300    8358 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 18:47:02.661375    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 18:47:02.671305    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:47:02.681312    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 18:47:02.690985    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 18:47:02.700377    8358 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:47:02.709183    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 18:47:02.718942    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 18:47:02.728130    8358 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 18:47:02.737753    8358 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:47:02.746065    8358 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:47:02.746158    8358 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:47:02.759485    8358 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
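	The br_netfilter and ip_forward steps above are the usual pod-networking prerequisites; a sketch of the equivalent sysctl form, assuming a stock Ubuntu 22.04 kernel like the one in this run:
	  # Load the bridge-netfilter module so bridged pod traffic is visible to iptables,
	  # then enable the two settings kube-proxy and the CNI rely on.
	  sudo modprobe br_netfilter
	  sudo sysctl -w net.bridge.bridge-nf-call-iptables=1 net.ipv4.ip_forward=1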
	I1009 18:47:02.767767    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:02.844603    8358 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 18:47:02.976463    8358 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 18:47:02.976571    8358 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 18:47:02.980523    8358 start.go:563] Will wait 60s for crictl version
	I1009 18:47:02.980612    8358 ssh_runner.go:195] Run: which crictl
	I1009 18:47:02.983890    8358 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:47:03.019237    8358 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1009 18:47:03.019383    8358 ssh_runner.go:195] Run: containerd --version
	I1009 18:47:03.041814    8358 ssh_runner.go:195] Run: containerd --version
	I1009 18:47:03.067382    8358 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1009 18:47:03.069067    8358 cli_runner.go:164] Run: docker network inspect addons-514774 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:47:03.084921    8358 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:47:03.088503    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
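	Unrolled for readability, the host-entry one-liner above does the following (a sketch; same /etc/hosts semantics, temp-file name is illustrative):
	  # Drop any stale host.minikube.internal entry, append the docker network gateway,
	  # then copy the temp file back over /etc/hosts with sudo.
	  grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
	  printf '192.168.49.1\thost.minikube.internal\n' >> /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts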
	I1009 18:47:03.100389    8358 kubeadm.go:883] updating cluster {Name:addons-514774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:47:03.100513    8358 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 18:47:03.100582    8358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:03.141419    8358 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:47:03.141444    8358 containerd.go:534] Images already preloaded, skipping extraction
	I1009 18:47:03.141547    8358 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:47:03.177612    8358 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 18:47:03.177636    8358 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:47:03.177646    8358 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1009 18:47:03.177736    8358 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-514774 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-514774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:47:03.177808    8358 ssh_runner.go:195] Run: sudo crictl info
	I1009 18:47:03.215943    8358 cni.go:84] Creating CNI manager for ""
	I1009 18:47:03.215968    8358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:47:03.215979    8358 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:47:03.216028    8358 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514774 NodeName:addons-514774 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:47:03.216214    8358 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-514774"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:47:03.216308    8358 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:47:03.225060    8358 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:47:03.225131    8358 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:47:03.234168    8358 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 18:47:03.251989    8358 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:47:03.270219    8358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1009 18:47:03.288371    8358 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:47:03.291865    8358 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:47:03.303429    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:03.393272    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:03.406870    8358 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774 for IP: 192.168.49.2
	I1009 18:47:03.406934    8358 certs.go:194] generating shared ca certs ...
	I1009 18:47:03.406953    8358 certs.go:226] acquiring lock for ca certs: {Name:mke6990d9a3fb276a87991bc9cbf7d64b4192c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:03.407137    8358 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key
	I1009 18:47:04.189512    8358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt ...
	I1009 18:47:04.189544    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt: {Name:mk6173cd77dc7fe135b3c420a28b0057a01dbd4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:04.189780    8358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key ...
	I1009 18:47:04.189796    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key: {Name:mkc63cec63480ed1b49c76ab26733a10a9cadd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:04.189878    8358 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key
	I1009 18:47:04.912652    8358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.crt ...
	I1009 18:47:04.912683    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.crt: {Name:mk13f2f7c9195ca700725f9128319e829773e5b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:04.912868    8358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key ...
	I1009 18:47:04.912881    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key: {Name:mk15a63073951316b109edb97e27e3c1621a191a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:04.912962    8358 certs.go:256] generating profile certs ...
	I1009 18:47:04.913021    8358 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.key
	I1009 18:47:04.913047    8358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt with IP's: []
	I1009 18:47:05.115504    8358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt ...
	I1009 18:47:05.115533    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: {Name:mk3e41c9b27171a8157b1e985f5b62c2e603fc19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.115717    8358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.key ...
	I1009 18:47:05.115732    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.key: {Name:mk6a723865f760ef28dac4cafe7a7e6458bb6b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.115816    8358 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key.cfb4d484
	I1009 18:47:05.115837    8358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt.cfb4d484 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:47:05.590464    8358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt.cfb4d484 ...
	I1009 18:47:05.590505    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt.cfb4d484: {Name:mk11a560c15c09c1e04eb79856d4617b9a4431cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.590709    8358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key.cfb4d484 ...
	I1009 18:47:05.590724    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key.cfb4d484: {Name:mk1f0241429faa4b763a57520388a3e643a1079b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.590821    8358 certs.go:381] copying /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt.cfb4d484 -> /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt
	I1009 18:47:05.590915    8358 certs.go:385] copying /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key.cfb4d484 -> /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key
	I1009 18:47:05.590975    8358 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.key
	I1009 18:47:05.590997    8358 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.crt with IP's: []
	I1009 18:47:06.008618    8358 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.crt ...
	I1009 18:47:06.008658    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.crt: {Name:mk54290975dfd86857b6dc305fdd633d4ddb7b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:06.008843    8358 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.key ...
	I1009 18:47:06.008855    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.key: {Name:mk502ecb4c6767e50856481e3e375b4d2fa6d7a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:06.009035    8358 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:47:06.009075    8358 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:47:06.009106    8358 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:47:06.009135    8358 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem (1679 bytes)
	I1009 18:47:06.009717    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:47:06.036174    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:47:06.062620    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:47:06.086520    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 18:47:06.111007    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:47:06.135879    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 18:47:06.159476    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:47:06.183441    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:47:06.207291    8358 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:47:06.232003    8358 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:47:06.250135    8358 ssh_runner.go:195] Run: openssl version
	I1009 18:47:06.255432    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:47:06.264992    8358 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:06.268554    8358 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:06.268618    8358 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:47:06.275530    8358 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
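	The b5213941.0 link created above follows the OpenSSL CA-directory convention; a sketch of how that name is derived from the same certificate used in this run:
	  # The link name is the certificate's subject hash plus a ".0" suffix.
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"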
	I1009 18:47:06.284788    8358 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:47:06.288182    8358 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:47:06.288247    8358 kubeadm.go:392] StartCluster: {Name:addons-514774 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514774 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:06.288335    8358 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 18:47:06.288393    8358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:47:06.325801    8358 cri.go:89] found id: ""
	I1009 18:47:06.325873    8358 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:47:06.334594    8358 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:47:06.343355    8358 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:47:06.343418    8358 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:47:06.352408    8358 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:47:06.352430    8358 kubeadm.go:157] found existing configuration files:
	
	I1009 18:47:06.352482    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:47:06.361439    8358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:47:06.361529    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:47:06.370288    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:47:06.379080    8358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:47:06.379144    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:47:06.387666    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:47:06.396509    8358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:47:06.396597    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:47:06.404996    8358 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:47:06.413908    8358 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:47:06.414003    8358 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:47:06.422618    8358 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:47:06.462185    8358 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:47:06.462248    8358 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:47:06.498067    8358 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:47:06.498143    8358 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1009 18:47:06.498185    8358 kubeadm.go:310] OS: Linux
	I1009 18:47:06.498234    8358 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 18:47:06.498287    8358 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 18:47:06.498339    8358 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 18:47:06.498392    8358 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 18:47:06.498442    8358 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 18:47:06.498500    8358 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 18:47:06.498549    8358 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 18:47:06.498602    8358 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 18:47:06.498652    8358 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 18:47:06.563631    8358 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:47:06.563746    8358 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:47:06.563843    8358 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:47:06.569154    8358 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:47:06.571790    8358 out.go:235]   - Generating certificates and keys ...
	I1009 18:47:06.572001    8358 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:47:06.572930    8358 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:47:07.103080    8358 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:47:07.403425    8358 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:47:07.872609    8358 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:47:08.552013    8358 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:47:09.027821    8358 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:47:09.028195    8358 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-514774 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:09.409056    8358 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:47:09.409409    8358 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-514774 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:10.416991    8358 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:47:10.735385    8358 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:47:11.450260    8358 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:47:11.450534    8358 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:47:11.644244    8358 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:47:12.390044    8358 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:47:12.857431    8358 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:47:13.114569    8358 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:47:13.999677    8358 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:47:14.000463    8358 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:47:14.005080    8358 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:47:14.006773    8358 out.go:235]   - Booting up control plane ...
	I1009 18:47:14.006878    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:47:14.006958    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:47:14.007772    8358 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:47:14.034670    8358 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:47:14.040781    8358 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:47:14.040840    8358 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:47:14.136146    8358 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:47:14.136272    8358 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:47:15.137611    8358 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001737958s
	I1009 18:47:15.137721    8358 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:47:20.639650    8358 kubeadm.go:310] [api-check] The API server is healthy after 5.502038647s
	I1009 18:47:20.659442    8358 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:47:20.675528    8358 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:47:20.706471    8358 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:47:20.706664    8358 kubeadm.go:310] [mark-control-plane] Marking the node addons-514774 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:47:20.718232    8358 kubeadm.go:310] [bootstrap-token] Using token: 51u397.yxel3mwgqzgtcwqm
	I1009 18:47:20.722726    8358 out.go:235]   - Configuring RBAC rules ...
	I1009 18:47:20.722854    8358 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:47:20.724877    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:47:20.733676    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:47:20.737376    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:47:20.741610    8358 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:47:20.750500    8358 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:47:21.049170    8358 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:47:21.482880    8358 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:47:22.047242    8358 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:47:22.048537    8358 kubeadm.go:310] 
	I1009 18:47:22.048613    8358 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:47:22.048623    8358 kubeadm.go:310] 
	I1009 18:47:22.048720    8358 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:47:22.048731    8358 kubeadm.go:310] 
	I1009 18:47:22.048756    8358 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:47:22.048818    8358 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:47:22.048872    8358 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:47:22.048880    8358 kubeadm.go:310] 
	I1009 18:47:22.048934    8358 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:47:22.048943    8358 kubeadm.go:310] 
	I1009 18:47:22.048990    8358 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:47:22.048998    8358 kubeadm.go:310] 
	I1009 18:47:22.049050    8358 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:47:22.049127    8358 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:47:22.049198    8358 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:47:22.049207    8358 kubeadm.go:310] 
	I1009 18:47:22.049290    8358 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:47:22.049370    8358 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:47:22.049378    8358 kubeadm.go:310] 
	I1009 18:47:22.049462    8358 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 51u397.yxel3mwgqzgtcwqm \
	I1009 18:47:22.049567    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:46ecff2404792e73c0fde7b74431755068cf24bba8856a1cc3cbe480cfe7ea71 \
	I1009 18:47:22.049593    8358 kubeadm.go:310] 	--control-plane 
	I1009 18:47:22.049598    8358 kubeadm.go:310] 
	I1009 18:47:22.049683    8358 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:47:22.049692    8358 kubeadm.go:310] 
	I1009 18:47:22.049773    8358 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 51u397.yxel3mwgqzgtcwqm \
	I1009 18:47:22.049884    8358 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:46ecff2404792e73c0fde7b74431755068cf24bba8856a1cc3cbe480cfe7ea71 
	I1009 18:47:22.053743    8358 kubeadm.go:310] W1009 18:47:06.458006    1030 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:22.054046    8358 kubeadm.go:310] W1009 18:47:06.459652    1030 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:47:22.054260    8358 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1009 18:47:22.054369    8358 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:47:22.054388    8358 cni.go:84] Creating CNI manager for ""
	I1009 18:47:22.054396    8358 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:47:22.057474    8358 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 18:47:22.060184    8358 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:47:22.064151    8358 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 18:47:22.064170    8358 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:47:22.084212    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:47:22.363533    8358 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:47:22.363675    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:22.363753    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-514774 minikube.k8s.io/updated_at=2024_10_09T18_47_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-514774 minikube.k8s.io/primary=true
	I1009 18:47:22.379851    8358 ops.go:34] apiserver oom_adj: -16
	I1009 18:47:22.486996    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:22.987591    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:23.487914    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:23.987117    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:24.487274    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:24.987155    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:25.487844    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:25.987812    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:26.487861    8358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:26.588057    8358 kubeadm.go:1113] duration metric: took 4.22443221s to wait for elevateKubeSystemPrivileges
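	The burst of repeated "kubectl get sa default" runs above is a poll loop; a sketch of the pattern (binary path and kubeconfig taken from this log, the loop form itself is illustrative):
	  # Re-check roughly every 500ms until the "default" ServiceAccount exists,
	  # i.e. until kube-system privileges can be granted (elevateKubeSystemPrivileges).
	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done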
	I1009 18:47:26.588086    8358 kubeadm.go:394] duration metric: took 20.299860822s to StartCluster
	I1009 18:47:26.588107    8358 settings.go:142] acquiring lock: {Name:mkf94bbff2baa0ab7fd6f65328728d4b59af8d85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:26.588221    8358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:47:26.588586    8358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/kubeconfig: {Name:mk88e77ecd1f863276e8fadf431093322057a8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:26.588790    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:47:26.588805    8358 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 18:47:26.589052    8358 config.go:182] Loaded profile config "addons-514774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 18:47:26.589084    8358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:47:26.589165    8358 addons.go:69] Setting yakd=true in profile "addons-514774"
	I1009 18:47:26.589180    8358 addons.go:234] Setting addon yakd=true in "addons-514774"
	I1009 18:47:26.589204    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.589667    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.590100    8358 addons.go:69] Setting metrics-server=true in profile "addons-514774"
	I1009 18:47:26.590124    8358 addons.go:234] Setting addon metrics-server=true in "addons-514774"
	I1009 18:47:26.590150    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.590576    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.593862    8358 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-514774"
	I1009 18:47:26.593932    8358 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-514774"
	I1009 18:47:26.593980    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.600037    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.594162    8358 addons.go:69] Setting registry=true in profile "addons-514774"
	I1009 18:47:26.600575    8358 addons.go:234] Setting addon registry=true in "addons-514774"
	I1009 18:47:26.600636    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.601138    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.594171    8358 addons.go:69] Setting storage-provisioner=true in profile "addons-514774"
	I1009 18:47:26.594197    8358 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-514774"
	I1009 18:47:26.594202    8358 addons.go:69] Setting volcano=true in profile "addons-514774"
	I1009 18:47:26.594206    8358 addons.go:69] Setting volumesnapshots=true in profile "addons-514774"
	I1009 18:47:26.594272    8358 out.go:177] * Verifying Kubernetes components...
	I1009 18:47:26.594408    8358 addons.go:69] Setting ingress=true in profile "addons-514774"
	I1009 18:47:26.594414    8358 addons.go:69] Setting cloud-spanner=true in profile "addons-514774"
	I1009 18:47:26.594418    8358 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-514774"
	I1009 18:47:26.594422    8358 addons.go:69] Setting default-storageclass=true in profile "addons-514774"
	I1009 18:47:26.594426    8358 addons.go:69] Setting gcp-auth=true in profile "addons-514774"
	I1009 18:47:26.594430    8358 addons.go:69] Setting inspektor-gadget=true in profile "addons-514774"
	I1009 18:47:26.594433    8358 addons.go:69] Setting ingress-dns=true in profile "addons-514774"
	I1009 18:47:26.605157    8358 addons.go:234] Setting addon ingress-dns=true in "addons-514774"
	I1009 18:47:26.605216    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.605737    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.623933    8358 addons.go:234] Setting addon storage-provisioner=true in "addons-514774"
	I1009 18:47:26.624003    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.624534    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.644830    8358 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-514774"
	I1009 18:47:26.645193    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.657812    8358 addons.go:234] Setting addon volcano=true in "addons-514774"
	I1009 18:47:26.657869    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.658334    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.680779    8358 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:47:26.665633    8358 addons.go:234] Setting addon volumesnapshots=true in "addons-514774"
	I1009 18:47:26.681029    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.681636    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.687826    8358 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:47:26.687917    8358 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:47:26.688030    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.694010    8358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:26.665666    8358 addons.go:234] Setting addon ingress=true in "addons-514774"
	I1009 18:47:26.694236    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.694835    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.665684    8358 addons.go:234] Setting addon cloud-spanner=true in "addons-514774"
	I1009 18:47:26.705409    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.706761    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.665716    8358 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-514774"
	I1009 18:47:26.712767    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.713237    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.726052    8358 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:47:26.728654    8358 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:47:26.728710    8358 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:47:26.728797    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.665726    8358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514774"
	I1009 18:47:26.738408    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.665738    8358 mustload.go:65] Loading cluster: addons-514774
	I1009 18:47:26.749344    8358 config.go:182] Loaded profile config "addons-514774": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 18:47:26.749657    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.665745    8358 addons.go:234] Setting addon inspektor-gadget=true in "addons-514774"
	I1009 18:47:26.763438    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.763928    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.822131    8358 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:47:26.825485    8358 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:26.825551    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:47:26.825646    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.837753    8358 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:47:26.837959    8358 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1009 18:47:26.838411    8358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:47:26.866503    8358 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-514774"
	I1009 18:47:26.866548    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:26.866986    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:26.879725    8358 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1009 18:47:26.886440    8358 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1009 18:47:26.886610    8358 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:47:26.886799    8358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:47:26.887100    8358 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:26.888584    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:47:26.888687    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.900192    8358 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:47:26.900215    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:47:26.900276    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.930133    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:26.930942    8358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:26.930979    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:47:26.931057    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.948330    8358 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:47:26.948475    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:47:26.952864    8358 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1009 18:47:26.955697    8358 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:26.955752    8358 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:47:26.957302    8358 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1009 18:47:26.957325    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1009 18:47:26.957394    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.976538    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:47:26.976566    8358 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:47:26.976628    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.985040    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:47:26.988998    8358 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:47:26.989024    8358 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:47:26.989100    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:26.993102    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:27.002170    8358 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:27.006389    8358 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:27.006417    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:47:27.006492    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:27.021056    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.022801    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:47:27.030670    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:47:27.032564    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.040323    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:47:27.046651    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:47:27.056776    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:47:27.059572    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:47:27.068749    8358 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:47:27.076523    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:47:27.076552    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:47:27.076626    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:27.077231    8358 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:47:27.078378    8358 addons.go:234] Setting addon default-storageclass=true in "addons-514774"
	I1009 18:47:27.078413    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:27.078881    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:27.089654    8358 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:47:27.099379    8358 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:27.099405    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:47:27.099473    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:27.099630    8358 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:47:27.102400    8358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:27.102423    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:47:27.102514    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:27.134613    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.159756    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.202260    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.204458    8358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:27.208250    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.260942    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.262401    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.274185    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.283453    8358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:27.283483    8358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:47:27.283541    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:27.283794    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.296392    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.300120    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.323605    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:27.779828    8358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:47:27.779847    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:47:27.927224    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:27.932984    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:27.982444    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:28.009535    8358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:47:28.009610    8358 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:47:28.046679    8358 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:47:28.046779    8358 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:47:28.056259    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:28.058643    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:28.087715    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1009 18:47:28.111763    8358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:47:28.111790    8358 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:47:28.115633    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:47:28.115660    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:47:28.144715    8358 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:47:28.144741    8358 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:47:28.177006    8358 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:47:28.177032    8358 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:47:28.186143    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:28.213539    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:28.336283    8358 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:28.336310    8358 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:47:28.363644    8358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:47:28.363671    8358 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:47:28.369614    8358 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:47:28.369638    8358 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:47:28.413484    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:47:28.413509    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:47:28.430068    8358 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:47:28.430094    8358 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:47:28.535654    8358 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:28.535691    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:47:28.587326    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:47:28.587352    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:47:28.603576    8358 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:47:28.603601    8358 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:47:28.707838    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:47:28.707864    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:47:28.718980    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:28.742825    8358 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:47:28.742851    8358 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:47:28.798909    8358 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:47:28.798934    8358 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:47:28.869077    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:28.886043    8358 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:28.886066    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:47:28.900285    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:47:28.900310    8358 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:47:29.049424    8358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:47:29.049450    8358 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:47:29.088803    8358 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:47:29.088842    8358 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:47:29.095032    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:29.143567    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:47:29.143643    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:47:29.241046    8358 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.402604249s)
	I1009 18:47:29.241180    8358 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 18:47:29.241145    8358 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.036657143s)
	I1009 18:47:29.242086    8358 node_ready.go:35] waiting up to 6m0s for node "addons-514774" to be "Ready" ...
	I1009 18:47:29.250347    8358 node_ready.go:49] node "addons-514774" has status "Ready":"True"
	I1009 18:47:29.250420    8358 node_ready.go:38] duration metric: took 8.316429ms for node "addons-514774" to be "Ready" ...
	I1009 18:47:29.250446    8358 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:47:29.272581    8358 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7bjj2" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:29.491966    8358 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:29.492033    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:47:29.508089    8358 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:47:29.508151    8358 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:47:29.641124    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:47:29.641187    8358 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:47:29.687900    8358 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:47:29.687963    8358 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:47:29.745473    8358 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-514774" context rescaled to 1 replicas
	I1009 18:47:29.782028    8358 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-7bjj2" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-7bjj2" not found
	I1009 18:47:29.782108    8358 pod_ready.go:82] duration metric: took 509.397251ms for pod "coredns-7c65d6cfc9-7bjj2" in "kube-system" namespace to be "Ready" ...
	E1009 18:47:29.782136    8358 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-7bjj2" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-7bjj2" not found
	I1009 18:47:29.782166    8358 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:30.188527    8358 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:47:30.188603    8358 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:47:30.254190    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:30.264605    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:47:30.264708    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:47:30.628185    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:47:30.628209    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:47:30.672665    8358 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:30.672739    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:47:30.975318    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:30.987621    8358 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:30.987705    8358 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:47:31.178326    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:31.859002    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:34.217666    8358 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:47:34.217754    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:34.250951    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:34.322447    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:34.906817    8358 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:47:34.983207    8358 addons.go:234] Setting addon gcp-auth=true in "addons-514774"
	I1009 18:47:34.983322    8358 host.go:66] Checking if "addons-514774" exists ...
	I1009 18:47:34.983883    8358 cli_runner.go:164] Run: docker container inspect addons-514774 --format={{.State.Status}}
	I1009 18:47:35.016326    8358 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:47:35.016385    8358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-514774
	I1009 18:47:35.050937    8358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/addons-514774/id_rsa Username:docker}
	I1009 18:47:35.884687    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.951667842s)
	I1009 18:47:35.884788    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.826056839s)
	I1009 18:47:35.884717    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.902194927s)
	I1009 18:47:35.884750    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.828429526s)
	I1009 18:47:35.884955    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.95764913s)
	I1009 18:47:35.884987    8358 addons.go:475] Verifying addon ingress=true in "addons-514774"
	I1009 18:47:35.886628    8358 out.go:177] * Verifying ingress addon...
	I1009 18:47:35.889128    8358 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:47:35.895265    8358 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:47:35.895295    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.397199    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.798045    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:36.911566    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.400557    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.931197    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.955125    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.867373918s)
	I1009 18:47:37.955229    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.76906257s)
	I1009 18:47:37.955300    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.741734641s)
	I1009 18:47:37.955560    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.236554306s)
	I1009 18:47:37.955608    8358 addons.go:475] Verifying addon metrics-server=true in "addons-514774"
	I1009 18:47:37.955653    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.086548993s)
	I1009 18:47:37.955694    8358 addons.go:475] Verifying addon registry=true in "addons-514774"
	I1009 18:47:37.956011    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.860889905s)
	I1009 18:47:37.956183    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.701928719s)
	W1009 18:47:37.956218    8358 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:37.956238    8358 retry.go:31] will retry after 345.372879ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:37.956302    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.980879255s)
	I1009 18:47:37.958987    8358 out.go:177] * Verifying registry addon...
	I1009 18:47:37.959150    8358 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-514774 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:47:37.962602    8358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:47:38.063063    8358 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:38.063135    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.302723    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:38.429249    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.470166    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.519927    8358 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.503542898s)
	I1009 18:47:38.520099    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.341692474s)
	I1009 18:47:38.520192    8358 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-514774"
	I1009 18:47:38.523013    8358 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:38.523106    8358 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:47:38.528446    8358 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:47:38.529483    8358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:47:38.531920    8358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:47:38.532031    8358 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:47:38.537820    8358 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:38.537891    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:38.572449    8358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:47:38.572520    8358 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:47:38.593412    8358 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:38.593481    8358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:47:38.613547    8358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:38.893992    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.966672    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.034245    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.307451    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:39.414630    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.503635    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.535499    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.895685    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.920103    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.306471518s)
	I1009 18:47:39.921215    8358 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.618397967s)
	I1009 18:47:39.923366    8358 addons.go:475] Verifying addon gcp-auth=true in "addons-514774"
	I1009 18:47:39.928666    8358 out.go:177] * Verifying gcp-auth addon...
	I1009 18:47:39.932466    8358 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:47:39.934901    8358 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:47:39.994352    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.034444    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.394416    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.494572    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.536824    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.894483    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.967418    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.034581    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.402985    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.466412    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.534588    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.790222    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:41.894033    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.993876    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.034792    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.394315    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.466540    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.534748    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.918099    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.969184    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.035136    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.394715    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.466819    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.535081    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.893412    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.966864    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.034471    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.288970    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:44.396420    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.468152    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.535405    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.894544    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.994788    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.034831    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.394494    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.472965    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.534700    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.893446    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.966634    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.035668    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.289722    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:46.394348    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.467357    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.534919    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.894923    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.966275    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.035761    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.394239    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.466616    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.534964    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.896136    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.996028    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.036338    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.393591    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.466705    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.534501    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.788928    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:48.894219    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.965835    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.036033    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.394080    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.466733    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.534750    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.894894    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.966829    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.035932    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.394277    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.466946    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.535740    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.894439    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.966959    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.035844    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.289681    8358 pod_ready.go:103] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:51.394455    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.467705    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.536677    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.789654    8358 pod_ready.go:93] pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:51.789729    8358 pod_ready.go:82] duration metric: took 22.007537449s for pod "coredns-7c65d6cfc9-b864v" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.789756    8358 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.798208    8358 pod_ready.go:93] pod "etcd-addons-514774" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:51.798282    8358 pod_ready.go:82] duration metric: took 8.50109ms for pod "etcd-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.798312    8358 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.804420    8358 pod_ready.go:93] pod "kube-apiserver-addons-514774" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:51.804490    8358 pod_ready.go:82] duration metric: took 6.156661ms for pod "kube-apiserver-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.804517    8358 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.810824    8358 pod_ready.go:93] pod "kube-controller-manager-addons-514774" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:51.810898    8358 pod_ready.go:82] duration metric: took 6.359027ms for pod "kube-controller-manager-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.810925    8358 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pzjbl" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.822461    8358 pod_ready.go:93] pod "kube-proxy-pzjbl" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:51.822532    8358 pod_ready.go:82] duration metric: took 11.575818ms for pod "kube-proxy-pzjbl" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.822558    8358 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:51.893851    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.967303    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.036004    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.186136    8358 pod_ready.go:93] pod "kube-scheduler-addons-514774" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:52.186163    8358 pod_ready.go:82] duration metric: took 363.584322ms for pod "kube-scheduler-addons-514774" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:52.186173    8358 pod_ready.go:39] duration metric: took 22.935702608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
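The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True. The following is only an illustrative client-go sketch of such a check, not minikube's pod_ready.go; the kubeconfig path is a placeholder and the pod name is one taken from the log.

// Illustrative sketch: poll one pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-addons-514774", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// The log's `has status "Ready":"True"` corresponds to this condition.
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // re-check periodically, as the log does
	}
}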
	I1009 18:47:52.186188    8358 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:47:52.186255    8358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:47:52.200540    8358 api_server.go:72] duration metric: took 25.61170757s to wait for apiserver process to appear ...
	I1009 18:47:52.200564    8358 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:47:52.200586    8358 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:47:52.209509    8358 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:47:52.210573    8358 api_server.go:141] control plane version: v1.31.1
	I1009 18:47:52.210603    8358 api_server.go:131] duration metric: took 10.030856ms to wait for apiserver health ...
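The api_server.go entries probe https://192.168.49.2:8443/healthz and proceed once it returns 200 with body "ok". A minimal sketch of that kind of probe follows; TLS verification is skipped purely for illustration (a real check would trust the cluster CA), and this is not minikube's api_server.go.

// Minimal healthz probe sketch against the endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure skip-verify only for this illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}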
	I1009 18:47:52.210612    8358 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:47:52.400984    8358 system_pods.go:59] 18 kube-system pods found
	I1009 18:47:52.401027    8358 system_pods.go:61] "coredns-7c65d6cfc9-b864v" [d1370b60-a7f3-4c0b-9485-a5d9bf4618ce] Running
	I1009 18:47:52.401063    8358 system_pods.go:61] "csi-hostpath-attacher-0" [a142d20c-2e35-402b-bc40-5ca90fe0c730] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:47:52.401082    8358 system_pods.go:61] "csi-hostpath-resizer-0" [fca4c208-6320-4147-8e10-cdf33914ef7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:47:52.401098    8358 system_pods.go:61] "csi-hostpathplugin-48rvj" [daa20e1d-76df-4e80-82f5-2058f1f0bd87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:47:52.401115    8358 system_pods.go:61] "etcd-addons-514774" [1d3b0cb6-128c-4705-aa1c-45caa22a4799] Running
	I1009 18:47:52.401120    8358 system_pods.go:61] "kindnet-xp52s" [26860463-a9dd-41ae-8023-d25d011d93ff] Running
	I1009 18:47:52.401125    8358 system_pods.go:61] "kube-apiserver-addons-514774" [de2ce5c5-f05a-4eaf-93b6-1c9c779b7957] Running
	I1009 18:47:52.401159    8358 system_pods.go:61] "kube-controller-manager-addons-514774" [08631970-6053-40e7-bf8d-f9eeb102d3d6] Running
	I1009 18:47:52.401177    8358 system_pods.go:61] "kube-ingress-dns-minikube" [568c24e2-e93a-49da-b1d2-0f53a924cf79] Running
	I1009 18:47:52.401182    8358 system_pods.go:61] "kube-proxy-pzjbl" [19bbea14-0fe3-45ee-9cc9-c84473ca6407] Running
	I1009 18:47:52.401186    8358 system_pods.go:61] "kube-scheduler-addons-514774" [76317f3c-b6dc-4fed-92dd-52cc94e401a5] Running
	I1009 18:47:52.401200    8358 system_pods.go:61] "metrics-server-84c5f94fbc-rtf2q" [af6559a7-0300-48fd-8898-9f5ab34e7686] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:47:52.401220    8358 system_pods.go:61] "nvidia-device-plugin-daemonset-b26r7" [f747350f-cab4-4932-aed7-6d57e3d8ab71] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:47:52.401253    8358 system_pods.go:61] "registry-66c9cd494c-cbzc4" [243b50be-9240-4b5c-b75e-00643fe07edd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:47:52.401271    8358 system_pods.go:61] "registry-proxy-dthqj" [791df215-51d1-4c9b-969e-92e1807ed15a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:47:52.401285    8358 system_pods.go:61] "snapshot-controller-56fcc65765-4jm6h" [5d0ae29d-9bb2-4960-a51c-8cb1ffb491d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:47:52.401292    8358 system_pods.go:61] "snapshot-controller-56fcc65765-z92ww" [8154a10c-8347-4c9b-83c1-387360213472] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:47:52.401308    8358 system_pods.go:61] "storage-provisioner" [b5a0d627-5081-41cf-81b9-627e9721aaa0] Running
	I1009 18:47:52.401315    8358 system_pods.go:74] duration metric: took 190.697136ms to wait for pod list to return data ...
	I1009 18:47:52.401340    8358 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:47:52.406907    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.487968    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.534902    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.586591    8358 default_sa.go:45] found service account: "default"
	I1009 18:47:52.586619    8358 default_sa.go:55] duration metric: took 185.271997ms for default service account to be created ...
	I1009 18:47:52.586629    8358 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:47:52.793379    8358 system_pods.go:86] 18 kube-system pods found
	I1009 18:47:52.793464    8358 system_pods.go:89] "coredns-7c65d6cfc9-b864v" [d1370b60-a7f3-4c0b-9485-a5d9bf4618ce] Running
	I1009 18:47:52.793491    8358 system_pods.go:89] "csi-hostpath-attacher-0" [a142d20c-2e35-402b-bc40-5ca90fe0c730] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:47:52.793533    8358 system_pods.go:89] "csi-hostpath-resizer-0" [fca4c208-6320-4147-8e10-cdf33914ef7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:47:52.793560    8358 system_pods.go:89] "csi-hostpathplugin-48rvj" [daa20e1d-76df-4e80-82f5-2058f1f0bd87] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:47:52.793613    8358 system_pods.go:89] "etcd-addons-514774" [1d3b0cb6-128c-4705-aa1c-45caa22a4799] Running
	I1009 18:47:52.793640    8358 system_pods.go:89] "kindnet-xp52s" [26860463-a9dd-41ae-8023-d25d011d93ff] Running
	I1009 18:47:52.793662    8358 system_pods.go:89] "kube-apiserver-addons-514774" [de2ce5c5-f05a-4eaf-93b6-1c9c779b7957] Running
	I1009 18:47:52.793684    8358 system_pods.go:89] "kube-controller-manager-addons-514774" [08631970-6053-40e7-bf8d-f9eeb102d3d6] Running
	I1009 18:47:52.793716    8358 system_pods.go:89] "kube-ingress-dns-minikube" [568c24e2-e93a-49da-b1d2-0f53a924cf79] Running
	I1009 18:47:52.793736    8358 system_pods.go:89] "kube-proxy-pzjbl" [19bbea14-0fe3-45ee-9cc9-c84473ca6407] Running
	I1009 18:47:52.793754    8358 system_pods.go:89] "kube-scheduler-addons-514774" [76317f3c-b6dc-4fed-92dd-52cc94e401a5] Running
	I1009 18:47:52.793778    8358 system_pods.go:89] "metrics-server-84c5f94fbc-rtf2q" [af6559a7-0300-48fd-8898-9f5ab34e7686] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:47:52.793799    8358 system_pods.go:89] "nvidia-device-plugin-daemonset-b26r7" [f747350f-cab4-4932-aed7-6d57e3d8ab71] Running
	I1009 18:47:52.793832    8358 system_pods.go:89] "registry-66c9cd494c-cbzc4" [243b50be-9240-4b5c-b75e-00643fe07edd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:47:52.793854    8358 system_pods.go:89] "registry-proxy-dthqj" [791df215-51d1-4c9b-969e-92e1807ed15a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:47:52.793879    8358 system_pods.go:89] "snapshot-controller-56fcc65765-4jm6h" [5d0ae29d-9bb2-4960-a51c-8cb1ffb491d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:47:52.793914    8358 system_pods.go:89] "snapshot-controller-56fcc65765-z92ww" [8154a10c-8347-4c9b-83c1-387360213472] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:47:52.793943    8358 system_pods.go:89] "storage-provisioner" [b5a0d627-5081-41cf-81b9-627e9721aaa0] Running
	I1009 18:47:52.793969    8358 system_pods.go:126] duration metric: took 207.332586ms to wait for k8s-apps to be running ...
	I1009 18:47:52.793991    8358 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:47:52.794075    8358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:47:52.809558    8358 system_svc.go:56] duration metric: took 15.557621ms WaitForService to wait for kubelet
	I1009 18:47:52.809584    8358 kubeadm.go:582] duration metric: took 26.220755982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
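The system_svc.go step above confirms the kubelet unit is active by running systemctl is-active --quiet and reading only the exit status. A minimal sketch of that check follows; sudo and the extra "service" token from the logged command are dropped here, and this is not minikube's system_svc.go.

// Sketch: exit status 0 from `systemctl is-active --quiet` means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; only the exit code carries the answer.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}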
	I1009 18:47:52.809605    8358 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:47:52.893766    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.967151    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.986755    8358 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 18:47:52.986789    8358 node_conditions.go:123] node cpu capacity is 2
	I1009 18:47:52.986802    8358 node_conditions.go:105] duration metric: took 177.191313ms to run NodePressure ...
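The NodePressure step reads the node's reported capacity (2 CPUs and 203034800Ki of ephemeral storage in the lines above). A sketch of fetching those figures with client-go, assuming a placeholder kubeconfig path; it is not the node_conditions.go implementation.

// Sketch: list nodes and print the capacity fields the log reports.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}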
	I1009 18:47:52.986834    8358 start.go:241] waiting for startup goroutines ...
	I1009 18:47:53.034852    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.394544    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.494942    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.534441    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.893944    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.967008    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.038663    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.395412    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.466211    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.537267    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.893883    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.966475    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.034473    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.394553    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.466986    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.534562    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.895896    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.966435    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.049082    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.414924    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.501676    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.535527    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.894091    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.966077    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.035256    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.394141    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.466360    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.534124    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.893598    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.966657    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.034541    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.394278    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.466277    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.534299    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.893826    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.966853    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.035476    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.394073    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.467123    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.535322    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.894465    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.967023    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.060099    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.395578    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.467506    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.535601    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.894123    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.966339    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.034715    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.394425    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.466531    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:01.535010    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.895208    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.967998    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.036415    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.395201    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.496298    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:02.536055    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.897670    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.966852    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.038290    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.394294    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.467837    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:03.535977    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.896435    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.967073    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.035844    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.394987    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.497111    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:04.597693    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.894413    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.966597    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.034737    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.393715    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.466422    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:05.534817    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.894059    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.966450    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.034809    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.393093    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.468394    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:06.537037    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.893578    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.967433    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.035264    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.394229    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.466078    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:07.534869    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.896487    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:07.967703    8358 kapi.go:107] duration metric: took 30.005096987s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:48:08.034463    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.394341    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:08.536607    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.895134    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.043150    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.394175    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:09.534968    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.893472    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.035348    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.393604    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:10.534935    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.895105    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.035500    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.394177    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:11.536960    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.896755    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.038853    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.393933    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:12.534015    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:12.894715    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.034973    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.405666    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:13.572103    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:13.893699    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.037142    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.394632    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:14.535089    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:14.895837    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.036080    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.393595    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:15.535455    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:15.894139    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.034189    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.393930    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:16.535023    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:16.893568    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.034050    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.394306    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:17.534896    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:17.894654    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.036567    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.393338    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:18.534164    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:18.894275    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.034165    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.411626    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:19.534280    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:19.893349    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.036039    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.394239    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:20.534286    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:20.894492    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.034771    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.393600    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:21.535104    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:21.893552    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.035106    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.393678    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:22.535324    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:22.893851    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.034703    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:23.395096    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:23.534767    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:23.894787    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.039528    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.395075    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:24.534863    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:24.894351    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.035074    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.393535    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:25.534342    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:25.893879    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.034593    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.393664    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:26.534553    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:26.893913    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.034135    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.393900    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:27.534217    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:27.894065    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.035408    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.393739    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:28.535289    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:28.903143    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.033741    8358 kapi.go:107] duration metric: took 50.504266015s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:48:29.393503    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:29.893415    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.394020    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:30.893875    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.393364    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:31.893567    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.393305    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.894101    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.393711    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.893682    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.396463    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.893739    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.394013    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.893673    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.393678    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.893825    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.393650    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.893652    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.393433    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.893971    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.393849    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.893213    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.394004    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.894312    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.394002    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.899464    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.394748    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.894193    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.393622    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.893647    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.393478    8358 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.907462    8358 kapi.go:107] duration metric: took 1m9.018319435s to wait for app.kubernetes.io/name=ingress-nginx ...
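The repeated kapi.go:96 entries poll pods matching a label selector until they leave Pending, and each addon wait ends with a kapi.go:107 duration line once its pods run. Below is a sketch of that polling pattern for the ingress-nginx selector; the ingress-nginx namespace and the kubeconfig path are assumptions, and this is not minikube's kapi.go.

// Sketch: list pods by label selector and loop until all report phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "app.kubernetes.io/name=ingress-nginx"
	for {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
				}
			}
			if running {
				fmt.Println("all pods for", selector, "are Running")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // the log re-checks roughly twice per second
	}
}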
	I1009 18:49:01.949342    8358 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:49:01.949371    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.436186    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.936128    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.435536    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.936443    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.436849    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.936901    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.436482    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.936030    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.436053    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.936690    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.436132    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.936202    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.436249    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.935547    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.436742    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.936562    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.436360    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.936872    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.436560    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.936354    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.436077    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.935598    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.436350    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.936762    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.438108    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.936233    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.436000    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.935885    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.436513    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.936727    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.436403    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.937018    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.435753    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.936501    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.436363    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.938357    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.435767    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.937145    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.436143    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.935888    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.435609    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.936240    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.436189    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.935937    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.438113    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.935822    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.435838    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.936510    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.436000    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.937751    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.436464    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.935866    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.436531    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.936056    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.436045    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.936758    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.436204    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.936230    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.435611    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.936625    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.436296    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.936375    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.435834    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.935886    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.437039    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.936382    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.435772    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.936703    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.436334    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.936008    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.436144    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.936830    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.436532    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.936360    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.436489    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.935900    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.436595    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.936179    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.436021    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.935752    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.436926    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.936703    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.436604    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.936416    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.437890    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.937104    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.435727    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.936785    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.437112    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.936880    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.436574    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.936212    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.435822    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.936428    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.437511    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.936923    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.435993    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.936737    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:51.436263    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:51.935896    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:52.436912    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:52.936609    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:53.435509    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:53.936462    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:54.436120    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:54.936000    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:55.440768    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:55.936958    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:56.435527    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:56.936018    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:57.435721    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:57.936764    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:58.436281    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:58.935958    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:59.437301    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:59.935824    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:00.436840    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:00.936708    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:01.436727    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:01.936719    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:02.436392    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:02.935793    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:03.436378    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:03.936333    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:04.435933    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:04.936960    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:05.436213    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:05.935893    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:06.437100    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:06.936294    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:07.436459    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:07.936321    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:08.437741    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:08.935834    8358 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:50:09.440554    8358 kapi.go:107] duration metric: took 2m29.508090648s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:50:09.442328    8358 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514774 cluster.
	I1009 18:50:09.444492    8358 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:50:09.445709    8358 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:50:09.448307    8358 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, storage-provisioner-rancher, volcano, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:50:09.449857    8358 addons.go:510] duration metric: took 2m42.860768444s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns storage-provisioner-rancher volcano nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:50:09.449909    8358 start.go:246] waiting for cluster config update ...
	I1009 18:50:09.449929    8358 start.go:255] writing updated cluster config ...
	I1009 18:50:09.450626    8358 ssh_runner.go:195] Run: rm -f paused
	I1009 18:50:09.811604    8358 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:50:09.813598    8358 out.go:177] * Done! kubectl is now configured to use "addons-514774" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	52d71822dfff8       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   ff7c56d1f7f3d       gcp-auth-89d5ffd79-699vs
	9091e5d0d7186       1a9605c872c1d       4 minutes ago       Running             admission                                0                   0f352dbba2e78       volcano-admission-5874dfdd79-82wcx
	8edb315a52c2a       289a818c8d9c5       4 minutes ago       Running             controller                               0                   3e6adfafc15ec       ingress-nginx-controller-bc57996ff-zbzdk
	ac24126176623       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	fb7490aa23afe       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	1e147462e6373       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	f98dc56ee0843       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	b253bc40df4e5       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	b93e13d73ae00       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   47af64342d5f9       csi-hostpath-resizer-0
	f0ab617345e3c       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   b0d9626c59c26       csi-hostpath-attacher-0
	f5101666bc249       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   d68bc20f0a906       volcano-scheduler-6c9778cbdf-9j66b
	dfcaaa3b34933       420193b27261a       5 minutes ago       Exited              patch                                    0                   003f621812622       ingress-nginx-admission-patch-z8k9d
	c55739710398a       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   61c1082e11111       local-path-provisioner-86d989889c-hfs65
	982d002b85942       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   1ac38e92b34ba       csi-hostpathplugin-48rvj
	f705c31832fa8       420193b27261a       5 minutes ago       Exited              create                                   0                   a8fe605971a11       ingress-nginx-admission-create-6b6kk
	5d7073b79ff79       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   82ba7a6d4ee6b       volcano-controllers-789ffc5785-7gf79
	cb930155ed173       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   d4194419cc5da       registry-proxy-dthqj
	ab02ed9f04617       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   21d2048d83134       snapshot-controller-56fcc65765-4jm6h
	cae58f512b63b       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   cfab439775bb7       snapshot-controller-56fcc65765-z92ww
	6bd38550f986c       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   c2a1cce1d5b8b       metrics-server-84c5f94fbc-rtf2q
	2038a3ef1483e       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   1a7447df23121       cloud-spanner-emulator-5b584cc74-kr752
	3c49344a9769c       77bdba588b953       5 minutes ago       Running             yakd                                     0                   ebb815569f799       yakd-dashboard-67d98fc6b-85nlx
	ed3a1a775a493       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   6bafa62876c1d       registry-66c9cd494c-cbzc4
	25a43d76eb935       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   eeb8262ca2f13       nvidia-device-plugin-daemonset-b26r7
	796897336f750       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   86135f174415e       coredns-7c65d6cfc9-b864v
	f67277b424040       68de1ddeaded8       5 minutes ago       Running             gadget                                   0                   ac40d82ed20ba       gadget-dtfmd
	3c8fc961255a4       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   57aa5245e3ec7       kube-ingress-dns-minikube
	f7d18090af72a       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   998937d71ceca       storage-provisioner
	2fd4618be9497       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                              0                   a725cdf048fdc       kindnet-xp52s
	929fdd2cb5176       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   6c1da1ea08df7       kube-proxy-pzjbl
	9d2b5f248b4fd       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   7935d8325bb80       kube-scheduler-addons-514774
	d1d139b6f7ecc       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   d71cdddb361c3       kube-apiserver-addons-514774
	5064a62b9fcb6       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   417320ff92f70       kube-controller-manager-addons-514774
	f9afa14b9e381       27e3830e14027       6 minutes ago       Running             etcd                                     0                   765cd7b76499d       etcd-addons-514774
	
	
	==> containerd <==
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.417780795Z" level=info msg="TearDown network for sandbox \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\" successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.417820031Z" level=info msg="StopPodSandbox for \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\" returns successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.418359951Z" level=info msg="RemovePodSandbox for \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\""
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.418488752Z" level=info msg="Forcibly stopping sandbox \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\""
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.427557190Z" level=info msg="TearDown network for sandbox \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\" successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.435145174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.435418580Z" level=info msg="RemovePodSandbox \"1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1\" returns successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.436067617Z" level=info msg="StopPodSandbox for \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\""
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.445465493Z" level=info msg="TearDown network for sandbox \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\" successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.445503950Z" level=info msg="StopPodSandbox for \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\" returns successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.445991505Z" level=info msg="RemovePodSandbox for \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\""
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.446119642Z" level=info msg="Forcibly stopping sandbox \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\""
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.454325788Z" level=info msg="TearDown network for sandbox \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\" successfully"
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.465406657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 09 18:50:21 addons-514774 containerd[822]: time="2024-10-09T18:50:21.465534967Z" level=info msg="RemovePodSandbox \"1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55\" returns successfully"
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.469173973Z" level=info msg="RemoveContainer for \"afcbc75e1fc7cf558036fa9aa1c0d197b8fd2540807a71eac1bd751948c4973b\""
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.475829668Z" level=info msg="RemoveContainer for \"afcbc75e1fc7cf558036fa9aa1c0d197b8fd2540807a71eac1bd751948c4973b\" returns successfully"
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.477792123Z" level=info msg="StopPodSandbox for \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\""
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.486192915Z" level=info msg="TearDown network for sandbox \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\" successfully"
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.486234637Z" level=info msg="StopPodSandbox for \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\" returns successfully"
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.487009350Z" level=info msg="RemovePodSandbox for \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\""
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.487061952Z" level=info msg="Forcibly stopping sandbox \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\""
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.495058888Z" level=info msg="TearDown network for sandbox \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\" successfully"
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.501891088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 09 18:51:21 addons-514774 containerd[822]: time="2024-10-09T18:51:21.502010585Z" level=info msg="RemovePodSandbox \"7eba2a905d8959fcc4b8891cf01bdb5e951ccc3802952b3ef8cf5f4e429e89ba\" returns successfully"
	
	
	==> coredns [796897336f7507f1d892c6d0dc06d3d4b3661e533c4b1dda086b9fbf8f042ed5] <==
	[INFO] 10.244.0.8:38714 - 28849 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000100068s
	[INFO] 10.244.0.8:38714 - 2148 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.005501396s
	[INFO] 10.244.0.8:38714 - 901 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00576164s
	[INFO] 10.244.0.8:38714 - 62673 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119481s
	[INFO] 10.244.0.8:38714 - 24107 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00060222s
	[INFO] 10.244.0.8:39221 - 37455 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091756s
	[INFO] 10.244.0.8:39221 - 37693 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000115354s
	[INFO] 10.244.0.8:39700 - 980 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052397s
	[INFO] 10.244.0.8:39700 - 518 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00017694s
	[INFO] 10.244.0.8:38169 - 43805 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060176s
	[INFO] 10.244.0.8:38169 - 43576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089811s
	[INFO] 10.244.0.8:40069 - 1455 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001471371s
	[INFO] 10.244.0.8:40069 - 1677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001243471s
	[INFO] 10.244.0.8:52397 - 29565 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077939s
	[INFO] 10.244.0.8:52397 - 29385 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077783s
	[INFO] 10.244.0.24:53879 - 32106 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000337617s
	[INFO] 10.244.0.24:55610 - 42906 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000357589s
	[INFO] 10.244.0.24:47066 - 9993 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012122s
	[INFO] 10.244.0.24:45353 - 49066 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174224s
	[INFO] 10.244.0.24:55403 - 30335 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120112s
	[INFO] 10.244.0.24:53023 - 27976 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156067s
	[INFO] 10.244.0.24:40057 - 56131 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003429962s
	[INFO] 10.244.0.24:41972 - 44644 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003826852s
	[INFO] 10.244.0.24:54457 - 11607 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005738126s
	[INFO] 10.244.0.24:48656 - 24768 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006684937s
	
	
	==> describe nodes <==
	Name:               addons-514774
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514774
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-514774
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_47_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-514774
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-514774"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:47:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514774
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 18:53:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 18:50:25 +0000   Wed, 09 Oct 2024 18:47:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 18:50:25 +0000   Wed, 09 Oct 2024 18:47:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 18:50:25 +0000   Wed, 09 Oct 2024 18:47:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 18:50:25 +0000   Wed, 09 Oct 2024 18:47:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-514774
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c3307efe1f548ea80e78cb516969a0c
	  System UUID:                f800d1d8-b8a5-44e5-8b16-2763a4e9655e
	  Boot ID:                    82386538-14d4-4a77-b4cb-0988d545cff7
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-kr752      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gadget                      gadget-dtfmd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-699vs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-zbzdk    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-7c65d6cfc9-b864v                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-48rvj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 etcd-addons-514774                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-xp52s                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-514774                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-514774       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-pzjbl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-514774                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-rtf2q             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-b26r7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-66c9cd494c-cbzc4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-dthqj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-4jm6h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-z92ww        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-hfs65     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-5874dfdd79-82wcx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-789ffc5785-7gf79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-6c9778cbdf-9j66b          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-85nlx              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node addons-514774 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m14s)  kubelet          Node addons-514774 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node addons-514774 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-514774 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-514774 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-514774 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-514774 event: Registered Node addons-514774 in Controller
	
	
	==> dmesg <==
	[Oct 9 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015212] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.462139] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.053294] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014996] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.652682] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.112018] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [f9afa14b9e381a008763ce3c271e307729e6eec3751a7516d388b7c244110bd6] <==
	{"level":"info","ts":"2024-10-09T18:47:15.551499Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T18:47:15.551596Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-09T18:47:15.551607Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-09T18:47:15.552321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T18:47:15.552871Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T18:47:16.136709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:16.136951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:16.137087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-09T18:47:16.137189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:16.137283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:16.137380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:16.137454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-09T18:47:16.144746Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:16.147567Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-514774 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T18:47:16.147898Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:16.147983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:16.148105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:16.148203Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T18:47:16.148391Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:16.148492Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T18:47:16.148580Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T18:47:16.149168Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:16.149363Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:47:16.161584Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-09T18:47:16.166280Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [52d71822dfff8b3d61f1001f0fab38f07086ddcbabd1e431949c35be92b9188a] <==
	2024/10/09 18:50:09 GCP Auth Webhook started!
	2024/10/09 18:50:26 Ready to marshal response ...
	2024/10/09 18:50:26 Ready to write response ...
	2024/10/09 18:50:27 Ready to marshal response ...
	2024/10/09 18:50:27 Ready to write response ...
	
	
	==> kernel <==
	 18:53:28 up 35 min,  0 users,  load average: 0.11, 0.65, 0.45
	Linux addons-514774 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2fd4618be9497287063b000def392b6bafbd952513ca2ad8bc775efc966651a4] <==
	I1009 18:51:21.004523       1 main.go:300] handling current node
	I1009 18:51:31.001777       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:51:31.001810       1 main.go:300] handling current node
	I1009 18:51:41.008525       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:51:41.008560       1 main.go:300] handling current node
	I1009 18:51:51.009192       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:51:51.009228       1 main.go:300] handling current node
	I1009 18:52:01.006152       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:01.006189       1 main.go:300] handling current node
	I1009 18:52:11.005208       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:11.005245       1 main.go:300] handling current node
	I1009 18:52:21.005412       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:21.005454       1 main.go:300] handling current node
	I1009 18:52:31.002643       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:31.002679       1 main.go:300] handling current node
	I1009 18:52:41.008698       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:41.008907       1 main.go:300] handling current node
	I1009 18:52:51.007623       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:52:51.007658       1 main.go:300] handling current node
	I1009 18:53:01.010402       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:53:01.010434       1 main.go:300] handling current node
	I1009 18:53:11.005539       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:53:11.005577       1 main.go:300] handling current node
	I1009 18:53:21.006678       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:53:21.006711       1 main.go:300] handling current node
	
	
	==> kube-apiserver [d1d139b6f7ecca01d3752ce0652e9089460270d6846515217b588189976467f5] <==
	W1009 18:48:41.402121       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:42.431698       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:42.795824       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.202.2:443: connect: connection refused
	E1009 18:48:42.795862       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.202.2:443: connect: connection refused" logger="UnhandledError"
	W1009 18:48:42.797469       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:42.866581       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.202.2:443: connect: connection refused
	E1009 18:48:42.866622       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.202.2:443: connect: connection refused" logger="UnhandledError"
	W1009 18:48:42.868183       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:43.487587       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:44.503093       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:45.511451       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:46.587509       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:47.684822       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:48.746065       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:49.756994       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:50.760607       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:48:51.848675       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.202.106:443: connect: connection refused
	W1009 18:49:01.774575       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.202.2:443: connect: connection refused
	E1009 18:49:01.774616       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.202.2:443: connect: connection refused" logger="UnhandledError"
	W1009 18:49:42.806134       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.202.2:443: connect: connection refused
	E1009 18:49:42.806179       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.202.2:443: connect: connection refused" logger="UnhandledError"
	W1009 18:49:42.876184       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.202.2:443: connect: connection refused
	E1009 18:49:42.876223       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.202.2:443: connect: connection refused" logger="UnhandledError"
	I1009 18:50:26.401467       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1009 18:50:26.449328       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [5064a62b9fcb6dd09bd7fb884be0a71efe0abbf39f382850c34e0018d7d982a0] <==
	I1009 18:49:42.824002       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:42.833585       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:42.845223       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:42.884884       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:42.901633       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:42.901789       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:42.914412       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:44.011292       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:44.025303       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:45.148765       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:45.171304       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:46.156273       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:46.166549       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:46.175519       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1009 18:49:46.178796       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:46.188520       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:49:46.196066       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1009 18:50:09.103725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="12.614705ms"
	I1009 18:50:09.103859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="93.75µs"
	I1009 18:50:16.021779       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1009 18:50:16.027341       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1009 18:50:16.073056       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1009 18:50:16.075914       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1009 18:50:25.157687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-514774"
	I1009 18:50:26.094263       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [929fdd2cb51763117ab17e3eaa8d91935bb6bd6c5da168d4654e35ea4bd4fea7] <==
	I1009 18:47:27.878100       1 server_linux.go:66] "Using iptables proxy"
	I1009 18:47:27.953465       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1009 18:47:27.953524       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:47:27.991248       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:47:27.991303       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:47:27.993503       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:47:28.002585       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:47:28.002613       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:47:28.014582       1 config.go:199] "Starting service config controller"
	I1009 18:47:28.014636       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:47:28.014789       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:47:28.014803       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:47:28.023999       1 config.go:328] "Starting node config controller"
	I1009 18:47:28.024023       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:47:28.114846       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:47:28.115128       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 18:47:28.128363       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9d2b5f248b4fda6d4e46f7cfeb2d9780551370b16d851108fdb01e28bff818e2] <==
	W1009 18:47:19.674377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:47:19.674395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.674442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 18:47:19.674457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.674750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:19.674774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.675045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 18:47:19.675070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:19.677053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:19.677183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677225       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 18:47:19.677238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:47:19.677352       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 18:47:19.677438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 18:47:19.677488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 18:47:19.677575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:47:19.677637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:47:19.677650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1009 18:47:20.868505       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 18:49:43 addons-514774 kubelet[1501]: I1009 18:49:43.092879    1501 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrdxw\" (UniqueName: \"kubernetes.io/projected/b135775e-0e65-46bb-b0ba-d4714ce35a3b-kube-api-access-hrdxw\") pod \"gcp-auth-certs-patch-5tkg9\" (UID: \"b135775e-0e65-46bb-b0ba-d4714ce35a3b\") " pod="gcp-auth/gcp-auth-certs-patch-5tkg9"
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.311027    1501 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrdxw\" (UniqueName: \"kubernetes.io/projected/b135775e-0e65-46bb-b0ba-d4714ce35a3b-kube-api-access-hrdxw\") pod \"b135775e-0e65-46bb-b0ba-d4714ce35a3b\" (UID: \"b135775e-0e65-46bb-b0ba-d4714ce35a3b\") "
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.311110    1501 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4wlq\" (UniqueName: \"kubernetes.io/projected/5d93c137-aabc-4e04-83d6-55467c56bdcc-kube-api-access-p4wlq\") pod \"5d93c137-aabc-4e04-83d6-55467c56bdcc\" (UID: \"5d93c137-aabc-4e04-83d6-55467c56bdcc\") "
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.313077    1501 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d93c137-aabc-4e04-83d6-55467c56bdcc-kube-api-access-p4wlq" (OuterVolumeSpecName: "kube-api-access-p4wlq") pod "5d93c137-aabc-4e04-83d6-55467c56bdcc" (UID: "5d93c137-aabc-4e04-83d6-55467c56bdcc"). InnerVolumeSpecName "kube-api-access-p4wlq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.313826    1501 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b135775e-0e65-46bb-b0ba-d4714ce35a3b-kube-api-access-hrdxw" (OuterVolumeSpecName: "kube-api-access-hrdxw") pod "b135775e-0e65-46bb-b0ba-d4714ce35a3b" (UID: "b135775e-0e65-46bb-b0ba-d4714ce35a3b"). InnerVolumeSpecName "kube-api-access-hrdxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.412137    1501 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p4wlq\" (UniqueName: \"kubernetes.io/projected/5d93c137-aabc-4e04-83d6-55467c56bdcc-kube-api-access-p4wlq\") on node \"addons-514774\" DevicePath \"\""
	Oct 09 18:49:45 addons-514774 kubelet[1501]: I1009 18:49:45.412181    1501 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hrdxw\" (UniqueName: \"kubernetes.io/projected/b135775e-0e65-46bb-b0ba-d4714ce35a3b-kube-api-access-hrdxw\") on node \"addons-514774\" DevicePath \"\""
	Oct 09 18:49:46 addons-514774 kubelet[1501]: I1009 18:49:46.010642    1501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ba67d4f40f54aaeb44cb09e33a897f2046f4a0232e7ef043e4edcc8abb4e6f1"
	Oct 09 18:49:46 addons-514774 kubelet[1501]: I1009 18:49:46.013073    1501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc756f3f9fe75f769b976d928342ecc40d0fbd6fbc60a435ca5eac4f8c80d55"
	Oct 09 18:50:09 addons-514774 kubelet[1501]: I1009 18:50:09.091579    1501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-699vs" podStartSLOduration=65.277324455 podStartE2EDuration="1m8.091558868s" podCreationTimestamp="2024-10-09 18:49:01 +0000 UTC" firstStartedPulling="2024-10-09 18:50:06.141284172 +0000 UTC m=+164.905861062" lastFinishedPulling="2024-10-09 18:50:08.955518568 +0000 UTC m=+167.720095475" observedRunningTime="2024-10-09 18:50:09.089911467 +0000 UTC m=+167.854488391" watchObservedRunningTime="2024-10-09 18:50:09.091558868 +0000 UTC m=+167.856135759"
	Oct 09 18:50:17 addons-514774 kubelet[1501]: I1009 18:50:17.338801    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d93c137-aabc-4e04-83d6-55467c56bdcc" path="/var/lib/kubelet/pods/5d93c137-aabc-4e04-83d6-55467c56bdcc/volumes"
	Oct 09 18:50:17 addons-514774 kubelet[1501]: I1009 18:50:17.339202    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b135775e-0e65-46bb-b0ba-d4714ce35a3b" path="/var/lib/kubelet/pods/b135775e-0e65-46bb-b0ba-d4714ce35a3b/volumes"
	Oct 09 18:50:21 addons-514774 kubelet[1501]: I1009 18:50:21.390332    1501 scope.go:117] "RemoveContainer" containerID="478f1c817ce335e4f9c6e6f0daf5427d2d0a818016392bf68e5a4ef26d124429"
	Oct 09 18:50:21 addons-514774 kubelet[1501]: I1009 18:50:21.399189    1501 scope.go:117] "RemoveContainer" containerID="710495ae2f4b412c823ec5ed92211b912beae124649229a2e21ad04ee14ed8ae"
	Oct 09 18:50:22 addons-514774 kubelet[1501]: I1009 18:50:22.336273    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b26r7" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:50:25 addons-514774 kubelet[1501]: I1009 18:50:25.335975    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-cbzc4" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:50:27 addons-514774 kubelet[1501]: I1009 18:50:27.339779    1501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a46c1e-05ec-4ae5-ad66-9de28be2ff47" path="/var/lib/kubelet/pods/e2a46c1e-05ec-4ae5-ad66-9de28be2ff47/volumes"
	Oct 09 18:50:55 addons-514774 kubelet[1501]: I1009 18:50:55.336533    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dthqj" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:51:21 addons-514774 kubelet[1501]: I1009 18:51:21.467889    1501 scope.go:117] "RemoveContainer" containerID="afcbc75e1fc7cf558036fa9aa1c0d197b8fd2540807a71eac1bd751948c4973b"
	Oct 09 18:51:28 addons-514774 kubelet[1501]: I1009 18:51:28.335423    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b26r7" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:51:49 addons-514774 kubelet[1501]: I1009 18:51:49.335801    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-cbzc4" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:52:08 addons-514774 kubelet[1501]: I1009 18:52:08.335894    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dthqj" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:52:58 addons-514774 kubelet[1501]: I1009 18:52:58.336065    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-b26r7" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:53:17 addons-514774 kubelet[1501]: I1009 18:53:17.336293    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-cbzc4" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:53:27 addons-514774 kubelet[1501]: I1009 18:53:27.335673    1501 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dthqj" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [f7d18090af72a666a1c5d60edc480331e3b45ea4c7312558d047dc2afa9de639] <==
	I1009 18:47:30.806561       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:47:30.838505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:47:30.838578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:47:30.853072       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:47:30.854114       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514774_1df96081-223f-4304-ad4e-649f8f410568!
	I1009 18:47:30.855504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2256d02c-a26d-4ef7-99ef-b288f70c694a", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514774_1df96081-223f-4304-ad4e-649f8f410568 became leader
	I1009 18:47:30.954998       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514774_1df96081-223f-4304-ad4e-649f8f410568!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-514774 -n addons-514774
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514774 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-6b6kk ingress-nginx-admission-patch-z8k9d test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-514774 describe pod ingress-nginx-admission-create-6b6kk ingress-nginx-admission-patch-z8k9d test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-514774 describe pod ingress-nginx-admission-create-6b6kk ingress-nginx-admission-patch-z8k9d test-job-nginx-0: exit status 1 (103.030588ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6b6kk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z8k9d" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-514774 describe pod ingress-nginx-admission-create-6b6kk ingress-nginx-admission-patch-z8k9d test-job-nginx-0: exit status 1
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable volcano --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable volcano --alsologtostderr -v=1: (11.15564631s)
--- FAIL: TestAddons/serial/Volcano (211.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (382.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1009 19:35:09.874417    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.96111765s)

                                                
                                                
-- stdout --
	* [old-k8s-version-135957] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-135957" primary control-plane node in "old-k8s-version-135957" cluster
	* Pulling base image v0.0.45-1728382586-19774 ...
	* Restarting existing docker container for "old-k8s-version-135957" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-135957 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:34:38.824896  212114 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:34:38.825158  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:34:38.825187  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:34:38.825217  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:34:38.825488  212114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:34:38.825995  212114 out.go:352] Setting JSON to false
	I1009 19:34:38.827172  212114 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4622,"bootTime":1728497857,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 19:34:38.827282  212114 start.go:139] virtualization:  
	I1009 19:34:38.830915  212114 out.go:177] * [old-k8s-version-135957] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 19:34:38.834334  212114 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:34:38.834396  212114 notify.go:220] Checking for updates...
	I1009 19:34:38.840239  212114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:34:38.842840  212114 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 19:34:38.845338  212114 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 19:34:38.847970  212114 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:34:38.850995  212114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:34:38.854044  212114 config.go:182] Loaded profile config "old-k8s-version-135957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1009 19:34:38.857281  212114 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 19:34:38.859851  212114 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:34:38.907551  212114 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:34:38.907767  212114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:34:39.002594  212114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:34:38.988911254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:34:39.002721  212114 docker.go:318] overlay module found
	I1009 19:34:39.005460  212114 out.go:177] * Using the docker driver based on existing profile
	I1009 19:34:39.008025  212114 start.go:297] selected driver: docker
	I1009 19:34:39.008043  212114 start.go:901] validating driver "docker" against &{Name:old-k8s-version-135957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-135957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:39.008173  212114 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:34:39.008873  212114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:34:39.117217  212114 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:34:39.082793792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:34:39.117648  212114 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:34:39.117677  212114 cni.go:84] Creating CNI manager for ""
	I1009 19:34:39.117729  212114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 19:34:39.117771  212114 start.go:340] cluster config:
	{Name:old-k8s-version-135957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-135957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:39.120854  212114 out.go:177] * Starting "old-k8s-version-135957" primary control-plane node in "old-k8s-version-135957" cluster
	I1009 19:34:39.123484  212114 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1009 19:34:39.126122  212114 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 19:34:39.128750  212114 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1009 19:34:39.128806  212114 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1009 19:34:39.128816  212114 cache.go:56] Caching tarball of preloaded images
	I1009 19:34:39.128902  212114 preload.go:172] Found /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 19:34:39.128912  212114 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1009 19:34:39.129035  212114 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/config.json ...
	I1009 19:34:39.129299  212114 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 19:34:39.149922  212114 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1009 19:34:39.149940  212114 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1009 19:34:39.149953  212114 cache.go:194] Successfully downloaded all kic artifacts
	I1009 19:34:39.149985  212114 start.go:360] acquireMachinesLock for old-k8s-version-135957: {Name:mk2236540877b98c3a6f3f4fce961e1e3f6adc1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:34:39.150037  212114 start.go:364] duration metric: took 33.362µs to acquireMachinesLock for "old-k8s-version-135957"
	I1009 19:34:39.150055  212114 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:34:39.150061  212114 fix.go:54] fixHost starting: 
	I1009 19:34:39.150335  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:39.167509  212114 fix.go:112] recreateIfNeeded on old-k8s-version-135957: state=Stopped err=<nil>
	W1009 19:34:39.167535  212114 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:34:39.170390  212114 out.go:177] * Restarting existing docker container for "old-k8s-version-135957" ...
	I1009 19:34:39.176781  212114 cli_runner.go:164] Run: docker start old-k8s-version-135957
	I1009 19:34:39.606873  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:39.635617  212114 kic.go:430] container "old-k8s-version-135957" state is running.
	I1009 19:34:39.635993  212114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135957
	I1009 19:34:39.671098  212114 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/config.json ...
	I1009 19:34:39.671351  212114 machine.go:93] provisionDockerMachine start ...
	I1009 19:34:39.671424  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:39.714340  212114 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:39.714612  212114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1009 19:34:39.714627  212114 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:34:39.715252  212114 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36186->127.0.0.1:33063: read: connection reset by peer
	I1009 19:34:42.852063  212114 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135957
	
	I1009 19:34:42.852087  212114 ubuntu.go:169] provisioning hostname "old-k8s-version-135957"
	I1009 19:34:42.852154  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:42.870448  212114 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:42.871536  212114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1009 19:34:42.871560  212114 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-135957 && echo "old-k8s-version-135957" | sudo tee /etc/hostname
	I1009 19:34:43.021362  212114 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-135957
	
	I1009 19:34:43.021563  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:43.039374  212114 main.go:141] libmachine: Using SSH client type: native
	I1009 19:34:43.040894  212114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1009 19:34:43.040924  212114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-135957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-135957/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-135957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:34:43.172830  212114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:34:43.172855  212114 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-2290/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-2290/.minikube}
	I1009 19:34:43.172877  212114 ubuntu.go:177] setting up certificates
	I1009 19:34:43.172886  212114 provision.go:84] configureAuth start
	I1009 19:34:43.172943  212114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135957
	I1009 19:34:43.192083  212114 provision.go:143] copyHostCerts
	I1009 19:34:43.192141  212114 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem, removing ...
	I1009 19:34:43.192149  212114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem
	I1009 19:34:43.192222  212114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem (1078 bytes)
	I1009 19:34:43.192326  212114 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem, removing ...
	I1009 19:34:43.192331  212114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem
	I1009 19:34:43.192357  212114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem (1123 bytes)
	I1009 19:34:43.192416  212114 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem, removing ...
	I1009 19:34:43.192420  212114 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem
	I1009 19:34:43.192443  212114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem (1679 bytes)
	I1009 19:34:43.192498  212114 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-135957 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-135957]
	I1009 19:34:44.009452  212114 provision.go:177] copyRemoteCerts
	I1009 19:34:44.009592  212114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:34:44.009660  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:44.027364  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:44.122220  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:34:44.147437  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 19:34:44.174172  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:34:44.198790  212114 provision.go:87] duration metric: took 1.02589123s to configureAuth
	I1009 19:34:44.198821  212114 ubuntu.go:193] setting minikube options for container-runtime
	I1009 19:34:44.199020  212114 config.go:182] Loaded profile config "old-k8s-version-135957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1009 19:34:44.199033  212114 machine.go:96] duration metric: took 4.527665607s to provisionDockerMachine
	I1009 19:34:44.199042  212114 start.go:293] postStartSetup for "old-k8s-version-135957" (driver="docker")
	I1009 19:34:44.199057  212114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:34:44.199117  212114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:34:44.199163  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:44.216046  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:44.309502  212114 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:34:44.312455  212114 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:34:44.312488  212114 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 19:34:44.312499  212114 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 19:34:44.312509  212114 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 19:34:44.312519  212114 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/addons for local assets ...
	I1009 19:34:44.312578  212114 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/files for local assets ...
	I1009 19:34:44.312710  212114 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem -> 75962.pem in /etc/ssl/certs
	I1009 19:34:44.312843  212114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:34:44.323252  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem --> /etc/ssl/certs/75962.pem (1708 bytes)
	I1009 19:34:44.347818  212114 start.go:296] duration metric: took 148.758044ms for postStartSetup
	I1009 19:34:44.347920  212114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:34:44.347968  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:44.366159  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:44.457436  212114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:34:44.461992  212114 fix.go:56] duration metric: took 5.311924827s for fixHost
	I1009 19:34:44.462019  212114 start.go:83] releasing machines lock for "old-k8s-version-135957", held for 5.311974877s
	I1009 19:34:44.462095  212114 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-135957
	I1009 19:34:44.479858  212114 ssh_runner.go:195] Run: cat /version.json
	I1009 19:34:44.479921  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:44.479981  212114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:34:44.480043  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:44.506352  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:44.519004  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:44.741022  212114 ssh_runner.go:195] Run: systemctl --version
	I1009 19:34:44.745590  212114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:34:44.749827  212114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 19:34:44.768688  212114 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1009 19:34:44.768775  212114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:34:44.779363  212114 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:34:44.779401  212114 start.go:495] detecting cgroup driver to use...
	I1009 19:34:44.779432  212114 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:34:44.779483  212114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 19:34:44.794142  212114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 19:34:44.805817  212114 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:34:44.805897  212114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:34:44.820731  212114 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:34:44.833785  212114 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:34:44.927874  212114 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:34:45.026675  212114 docker.go:233] disabling docker service ...
	I1009 19:34:45.026761  212114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:34:45.045240  212114 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:34:45.061211  212114 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:34:45.183174  212114 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:34:45.295743  212114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:34:45.309686  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:34:45.333583  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1009 19:34:45.344593  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 19:34:45.356694  212114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 19:34:45.356777  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 19:34:45.367813  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 19:34:45.380486  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 19:34:45.391115  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 19:34:45.401316  212114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:34:45.410486  212114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 19:34:45.421874  212114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:34:45.430979  212114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:34:45.440107  212114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:34:45.546320  212114 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 19:34:45.730395  212114 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 19:34:45.730473  212114 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 19:34:45.734504  212114 start.go:563] Will wait 60s for crictl version
	I1009 19:34:45.734581  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:34:45.738380  212114 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:34:45.789088  212114 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1009 19:34:45.789260  212114 ssh_runner.go:195] Run: containerd --version
	I1009 19:34:45.812949  212114 ssh_runner.go:195] Run: containerd --version
	I1009 19:34:45.843149  212114 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1009 19:34:45.845826  212114 cli_runner.go:164] Run: docker network inspect old-k8s-version-135957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:34:45.862065  212114 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1009 19:34:45.866055  212114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:34:45.877349  212114 kubeadm.go:883] updating cluster {Name:old-k8s-version-135957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-135957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:34:45.877466  212114 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1009 19:34:45.877523  212114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:34:45.916633  212114 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 19:34:45.916706  212114 containerd.go:534] Images already preloaded, skipping extraction
	I1009 19:34:45.916766  212114 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:34:45.952768  212114 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 19:34:45.952791  212114 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:34:45.952799  212114 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1009 19:34:45.952913  212114 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-135957 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-135957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:34:45.952980  212114 ssh_runner.go:195] Run: sudo crictl info
	I1009 19:34:45.998701  212114 cni.go:84] Creating CNI manager for ""
	I1009 19:34:45.998777  212114 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 19:34:45.998799  212114 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:34:45.998856  212114 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-135957 NodeName:old-k8s-version-135957 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 19:34:45.999021  212114 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-135957"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:34:45.999115  212114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 19:34:46.008631  212114 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:34:46.008725  212114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:34:46.018997  212114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1009 19:34:46.052758  212114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:34:46.077413  212114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1009 19:34:46.103283  212114 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:34:46.107746  212114 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:34:46.120143  212114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:34:46.217936  212114 ssh_runner.go:195] Run: sudo systemctl start kubelet
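	The bash one-liner at 19:34:46.107746 makes the /etc/hosts update idempotent: any stale control-plane.minikube.internal line is removed before the current mapping is appended. A minimal Go sketch of the same idea is shown below, using only the path, IP and hostname that appear in the log; the helper name and everything else here is illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing "<something>\t<hostname>" line and then
// appends a fresh "<ip>\t<hostname>" mapping, mirroring the shell one-liner.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale mapping; re-added below with the current IP
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Writing /etc/hosts needs root, just like the sudo'd version in the log.
	if err := upsertHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}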
	I1009 19:34:46.236988  212114 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957 for IP: 192.168.76.2
	I1009 19:34:46.237007  212114 certs.go:194] generating shared ca certs ...
	I1009 19:34:46.237024  212114 certs.go:226] acquiring lock for ca certs: {Name:mke6990d9a3fb276a87991bc9cbf7d64b4192c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:46.237163  212114 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key
	I1009 19:34:46.237218  212114 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key
	I1009 19:34:46.237231  212114 certs.go:256] generating profile certs ...
	I1009 19:34:46.237316  212114 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.key
	I1009 19:34:46.237387  212114 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/apiserver.key.54c639ff
	I1009 19:34:46.237435  212114 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/proxy-client.key
	I1009 19:34:46.237555  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596.pem (1338 bytes)
	W1009 19:34:46.237593  212114 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596_empty.pem, impossibly tiny 0 bytes
	I1009 19:34:46.237604  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:34:46.237630  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:34:46.237657  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:34:46.237697  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem (1679 bytes)
	I1009 19:34:46.237749  212114 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem (1708 bytes)
	I1009 19:34:46.238457  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:34:46.266829  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:34:46.299498  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:34:46.331364  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:34:46.358901  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 19:34:46.392073  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:34:46.416427  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:34:46.444209  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:34:46.470712  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem --> /usr/share/ca-certificates/75962.pem (1708 bytes)
	I1009 19:34:46.498396  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:34:46.528861  212114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596.pem --> /usr/share/ca-certificates/7596.pem (1338 bytes)
	I1009 19:34:46.556502  212114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:34:46.584443  212114 ssh_runner.go:195] Run: openssl version
	I1009 19:34:46.595794  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75962.pem && ln -fs /usr/share/ca-certificates/75962.pem /etc/ssl/certs/75962.pem"
	I1009 19:34:46.605908  212114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75962.pem
	I1009 19:34:46.609705  212114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:57 /usr/share/ca-certificates/75962.pem
	I1009 19:34:46.609800  212114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75962.pem
	I1009 19:34:46.616801  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75962.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:34:46.626610  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:34:46.636315  212114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:46.639941  212114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:46.640053  212114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:34:46.646928  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:34:46.656172  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7596.pem && ln -fs /usr/share/ca-certificates/7596.pem /etc/ssl/certs/7596.pem"
	I1009 19:34:46.667050  212114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7596.pem
	I1009 19:34:46.671955  212114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:57 /usr/share/ca-certificates/7596.pem
	I1009 19:34:46.672054  212114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7596.pem
	I1009 19:34:46.679292  212114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7596.pem /etc/ssl/certs/51391683.0"
	I1009 19:34:46.688332  212114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:34:46.691840  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:34:46.698698  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:34:46.705495  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:34:46.712075  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:34:46.718835  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:34:46.726287  212114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
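	Each of the openssl x509 ... -checkend 86400 runs above asks whether the named certificate will still be valid 24 hours from now; a non-zero exit would force the cert to be regenerated. A rough standard-library Go equivalent follows, shown only as an illustration: the file path is one named in the log, while the helper itself is hypothetical rather than minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// before now+window, which is what `openssl x509 -checkend <seconds>` checks.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}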
	I1009 19:34:46.734304  212114 kubeadm.go:392] StartCluster: {Name:old-k8s-version-135957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-135957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:34:46.734424  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 19:34:46.734517  212114 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:34:46.781960  212114 cri.go:89] found id: "84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:34:46.781996  212114 cri.go:89] found id: "c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:34:46.782003  212114 cri.go:89] found id: "9d9a8b7cd2768bc41f3938d3f14efd940b216c4e8b9c194ba9c196dcca4a9bf1"
	I1009 19:34:46.782007  212114 cri.go:89] found id: "e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:34:46.782032  212114 cri.go:89] found id: "d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:34:46.782040  212114 cri.go:89] found id: "187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:34:46.782044  212114 cri.go:89] found id: "1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:34:46.782047  212114 cri.go:89] found id: "4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:34:46.782050  212114 cri.go:89] found id: ""
	I1009 19:34:46.782123  212114 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1009 19:34:46.800985  212114 cri.go:116] JSON = null
	W1009 19:34:46.801052  212114 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
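	The warning at 19:34:46.801052 comes from runc ... list -f json printing the literal null while crictl ps -a reported eight container IDs: decoding null into a Go slice yields a nil, zero-length slice, so the unpause pass sees nothing to resume and is skipped. A tiny demonstration of that decoding behaviour follows; the container struct is invented for this sketch and only the null handling matters.

package main

import (
	"encoding/json"
	"fmt"
)

// container holds just enough of a runc list entry for this illustration.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	var paused []container
	// runc prints the literal "null" when it tracks no containers in that root.
	if err := json.Unmarshal([]byte("null"), &paused); err != nil {
		fmt.Println("unmarshal error:", err)
		return
	}
	// JSON null decodes into a nil slice without error, so the caller counts
	// 0 paused containers even though crictl reported 8 running ones.
	fmt.Println("paused containers:", len(paused)) // prints 0
}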
	I1009 19:34:46.801143  212114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:34:46.811044  212114 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 19:34:46.811118  212114 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 19:34:46.811195  212114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:34:46.829474  212114 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:34:46.830011  212114 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-135957" does not appear in /home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 19:34:46.830195  212114 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-2290/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-135957" cluster setting kubeconfig missing "old-k8s-version-135957" context setting]
	I1009 19:34:46.830601  212114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/kubeconfig: {Name:mk88e77ecd1f863276e8fadf431093322057a8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:46.832218  212114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:34:46.846178  212114 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1009 19:34:46.846262  212114 kubeadm.go:597] duration metric: took 35.122596ms to restartPrimaryControlPlane
	I1009 19:34:46.846285  212114 kubeadm.go:394] duration metric: took 111.990312ms to StartCluster
	I1009 19:34:46.846313  212114 settings.go:142] acquiring lock: {Name:mkf94bbff2baa0ab7fd6f65328728d4b59af8d85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:46.846411  212114 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 19:34:46.847127  212114 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/kubeconfig: {Name:mk88e77ecd1f863276e8fadf431093322057a8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:34:46.847400  212114 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 19:34:46.847800  212114 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:34:46.847930  212114 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-135957"
	I1009 19:34:46.847953  212114 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-135957"
	W1009 19:34:46.847960  212114 addons.go:243] addon storage-provisioner should already be in state true
	I1009 19:34:46.847992  212114 host.go:66] Checking if "old-k8s-version-135957" exists ...
	I1009 19:34:46.848033  212114 addons.go:69] Setting dashboard=true in profile "old-k8s-version-135957"
	I1009 19:34:46.848062  212114 addons.go:234] Setting addon dashboard=true in "old-k8s-version-135957"
	W1009 19:34:46.848099  212114 addons.go:243] addon dashboard should already be in state true
	I1009 19:34:46.848135  212114 host.go:66] Checking if "old-k8s-version-135957" exists ...
	I1009 19:34:46.848462  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:46.848884  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:46.847860  212114 config.go:182] Loaded profile config "old-k8s-version-135957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1009 19:34:46.852662  212114 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-135957"
	I1009 19:34:46.852690  212114 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-135957"
	W1009 19:34:46.852698  212114 addons.go:243] addon metrics-server should already be in state true
	I1009 19:34:46.852731  212114 host.go:66] Checking if "old-k8s-version-135957" exists ...
	I1009 19:34:46.852967  212114 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-135957"
	I1009 19:34:46.853045  212114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-135957"
	I1009 19:34:46.853271  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:46.853922  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:46.854281  212114 out.go:177] * Verifying Kubernetes components...
	I1009 19:34:46.857226  212114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:34:46.911524  212114 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:34:46.914317  212114 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:34:46.914339  212114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:34:46.914420  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:46.921567  212114 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 19:34:46.924800  212114 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 19:34:46.924827  212114 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 19:34:46.924895  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:46.932717  212114 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1009 19:34:46.935741  212114 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1009 19:34:46.938655  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1009 19:34:46.938679  212114 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1009 19:34:46.938750  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:46.948490  212114 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-135957"
	W1009 19:34:46.948515  212114 addons.go:243] addon default-storageclass should already be in state true
	I1009 19:34:46.948540  212114 host.go:66] Checking if "old-k8s-version-135957" exists ...
	I1009 19:34:46.949009  212114 cli_runner.go:164] Run: docker container inspect old-k8s-version-135957 --format={{.State.Status}}
	I1009 19:34:46.996938  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:47.005024  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:47.011131  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:47.014704  212114 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:34:47.014725  212114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:34:47.014789  212114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-135957
	I1009 19:34:47.043307  212114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/old-k8s-version-135957/id_rsa Username:docker}
	I1009 19:34:47.116756  212114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:34:47.172133  212114 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-135957" to be "Ready" ...
	I1009 19:34:47.236536  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1009 19:34:47.236608  212114 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1009 19:34:47.259084  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:34:47.271183  212114 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 19:34:47.271252  212114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 19:34:47.283754  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1009 19:34:47.283826  212114 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1009 19:34:47.289765  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:34:47.317187  212114 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 19:34:47.317264  212114 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 19:34:47.342379  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1009 19:34:47.342523  212114 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1009 19:34:47.364364  212114 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:34:47.364442  212114 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 19:34:47.400758  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:34:47.425465  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1009 19:34:47.425537  212114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1009 19:34:47.463411  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.463508  212114 retry.go:31] will retry after 139.499051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:47.512292  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.512375  212114 retry.go:31] will retry after 266.723085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.528517  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1009 19:34:47.528601  212114 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1009 19:34:47.592971  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.593048  212114 retry.go:31] will retry after 137.850195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.597785  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1009 19:34:47.597852  212114 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1009 19:34:47.604048  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:34:47.625412  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1009 19:34:47.625493  212114 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1009 19:34:47.673486  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1009 19:34:47.673505  212114 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1009 19:34:47.732232  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:34:47.743310  212114 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:34:47.743331  212114 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1009 19:34:47.780030  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:34:47.872167  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:34:47.924243  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:47.924273  212114 retry.go:31] will retry after 227.608527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:48.047832  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.047865  212114 retry.go:31] will retry after 384.361719ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:48.114785  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.114817  212114 retry.go:31] will retry after 314.128417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.152061  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:48.202914  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.202996  212114 retry.go:31] will retry after 202.419151ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:48.315466  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.315496  212114 retry.go:31] will retry after 410.438566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.405859  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:34:48.429263  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:34:48.432569  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1009 19:34:48.609970  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.610001  212114 retry.go:31] will retry after 284.565711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.726665  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:48.760102  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.760131  212114 retry.go:31] will retry after 805.81663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:48.771788  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.771893  212114 retry.go:31] will retry after 592.703604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:48.878317  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.878407  212114 retry.go:31] will retry after 926.390912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:48.895594  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:34:49.015844  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.015874  212114 retry.go:31] will retry after 430.309237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.173360  212114 node_ready.go:53] error getting node "old-k8s-version-135957": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-135957": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 19:34:49.364824  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:34:49.447244  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:34:49.545128  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.545160  212114 retry.go:31] will retry after 489.616691ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.566482  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:34:49.655213  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.655299  212114 retry.go:31] will retry after 1.241653882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:49.729626  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.729666  212114 retry.go:31] will retry after 874.082366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.804949  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:49.925300  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:49.925329  212114 retry.go:31] will retry after 956.334378ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.035573  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1009 19:34:50.161633  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.161663  212114 retry.go:31] will retry after 653.534014ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.604305  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:34:50.702037  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.702067  212114 retry.go:31] will retry after 1.765692181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.815388  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:34:50.882740  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:50.895133  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.895240  212114 retry.go:31] will retry after 2.524115187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.897440  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:34:50.987091  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:50.987176  212114 retry.go:31] will retry after 1.119374933s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:51.003517  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:51.003598  212114 retry.go:31] will retry after 1.668770369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:51.672897  212114 node_ready.go:53] error getting node "old-k8s-version-135957": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-135957": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 19:34:52.107450  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:52.193587  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:52.193618  212114 retry.go:31] will retry after 2.973156936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:52.467992  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:34:52.543758  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:52.543792  212114 retry.go:31] will retry after 1.955838845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:52.673052  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1009 19:34:52.770888  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:52.770935  212114 retry.go:31] will retry after 2.658454725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:53.419606  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1009 19:34:53.490077  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:53.490107  212114 retry.go:31] will retry after 1.953527339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:53.673709  212114 node_ready.go:53] error getting node "old-k8s-version-135957": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-135957": dial tcp 192.168.76.2:8443: connect: connection refused
	I1009 19:34:54.500535  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:34:54.577602  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:54.577635  212114 retry.go:31] will retry after 2.888783758s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:55.167879  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:34:55.271762  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:55.271797  212114 retry.go:31] will retry after 5.209963544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:55.430094  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:34:55.444473  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1009 19:34:55.660046  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:55.660091  212114 retry.go:31] will retry after 4.07372593s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1009 19:34:55.742773  212114 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:55.742814  212114 retry.go:31] will retry after 6.05131176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1009 19:34:57.467363  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:34:59.734246  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1009 19:35:00.482173  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:35:01.794270  212114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 19:35:03.145358  212114 node_ready.go:49] node "old-k8s-version-135957" has status "Ready":"True"
	I1009 19:35:03.145441  212114 node_ready.go:38] duration metric: took 15.973101039s for node "old-k8s-version-135957" to be "Ready" ...
	I1009 19:35:03.145469  212114 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:35:03.331109  212114 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-txr44" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:03.422519  212114 pod_ready.go:93] pod "coredns-74ff55c5b-txr44" in "kube-system" namespace has status "Ready":"True"
	I1009 19:35:03.422559  212114 pod_ready.go:82] duration metric: took 91.403457ms for pod "coredns-74ff55c5b-txr44" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:03.422576  212114 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:03.452947  212114 pod_ready.go:93] pod "etcd-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"True"
	I1009 19:35:03.452976  212114 pod_ready.go:82] duration metric: took 30.392533ms for pod "etcd-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:03.453004  212114 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:04.130343  212114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.66293545s)
	I1009 19:35:04.384317  212114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.902106009s)
	I1009 19:35:04.384564  212114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.590267312s)
	I1009 19:35:04.384579  212114 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-135957"
	I1009 19:35:04.384858  212114 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.650571893s)
	I1009 19:35:04.391037  212114 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-135957 addons enable metrics-server
	
	I1009 19:35:04.398048  212114 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1009 19:35:04.403665  212114 addons.go:510] duration metric: took 17.55586602s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1009 19:35:05.460363  212114 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:07.959110  212114 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:09.959290  212114 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:10.459413  212114 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"True"
	I1009 19:35:10.459449  212114 pod_ready.go:82] duration metric: took 7.006434734s for pod "kube-apiserver-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:10.459462  212114 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:35:12.465922  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:14.466766  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:16.966605  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:18.968826  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:21.465826  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:23.466520  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:25.467116  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:27.474872  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:30.010354  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:32.466074  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:34.466453  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:36.965976  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:38.966577  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:41.466744  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:43.966276  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:45.967502  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:47.970619  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:50.466689  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:52.965830  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:54.967028  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:56.967317  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:35:59.465959  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:01.466542  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:03.475543  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:05.966082  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:07.966953  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:09.967729  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:12.465358  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:14.465701  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:16.466029  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:18.468818  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:20.473427  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:22.965563  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:24.966658  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:27.465944  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:29.966658  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:32.465892  212114 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:32.966211  212114 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"True"
	I1009 19:36:32.966236  212114 pod_ready.go:82] duration metric: took 1m22.506766871s for pod "kube-controller-manager-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:32.966247  212114 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-whqjp" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:32.971113  212114 pod_ready.go:93] pod "kube-proxy-whqjp" in "kube-system" namespace has status "Ready":"True"
	I1009 19:36:32.971137  212114 pod_ready.go:82] duration metric: took 4.88284ms for pod "kube-proxy-whqjp" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:32.971149  212114 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:32.976121  212114 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-135957" in "kube-system" namespace has status "Ready":"True"
	I1009 19:36:32.976146  212114 pod_ready.go:82] duration metric: took 4.990071ms for pod "kube-scheduler-old-k8s-version-135957" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:32.976161  212114 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace to be "Ready" ...
	I1009 19:36:34.983013  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:36.983310  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:39.481985  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:41.482347  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:43.482830  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:45.982535  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:48.482688  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:50.983582  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:53.483128  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:55.982674  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:36:58.482852  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:00.483305  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:02.483460  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:04.982053  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:06.982923  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:09.482950  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:11.982032  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:13.982504  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:16.482515  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:18.482904  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:20.982286  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:23.481655  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:25.482400  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:27.983483  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:30.481842  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:32.482228  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:34.981881  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:36.983064  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:39.482979  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:41.982395  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:43.983182  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:45.985047  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:48.483002  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:50.982728  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:53.482282  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:55.982437  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:37:58.482599  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:00.984574  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:03.481827  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:05.482673  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:07.983340  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:10.482490  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:12.981861  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:14.981979  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:16.982490  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:18.982570  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:20.982816  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:23.482616  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:25.982556  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:28.481950  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:30.482563  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:32.483018  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:34.982127  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:36.982199  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:39.482903  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:41.982942  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:43.983027  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:46.482110  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:48.981592  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:50.983003  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:53.482477  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:55.482900  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:57.982613  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:38:59.983145  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:02.483201  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:04.484390  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:06.982213  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:09.482361  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:11.982789  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:14.482182  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:16.483677  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:18.982036  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:20.984014  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:23.482632  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:25.987516  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:28.491126  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:30.982136  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:33.482504  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:35.983022  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:37.983910  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:40.483859  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:42.982196  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:44.982447  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:47.482219  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:49.982340  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:51.982896  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:54.482386  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:56.482713  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:39:58.983853  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:01.487091  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:03.982593  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:05.984262  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:08.481984  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:10.482535  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:12.981573  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:14.983311  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:16.983856  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:18.986323  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:21.484022  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:23.983697  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:26.483244  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:28.484264  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:30.985198  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:32.983535  212114 pod_ready.go:82] duration metric: took 4m0.007360452s for pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace to be "Ready" ...
	E1009 19:40:32.983561  212114 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 19:40:32.983570  212114 pod_ready.go:39] duration metric: took 5m29.838077957s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:40:32.983586  212114 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:40:32.983616  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:40:32.983677  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:40:33.049530  212114 cri.go:89] found id: "a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:33.049553  212114 cri.go:89] found id: "4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:33.049558  212114 cri.go:89] found id: ""
	I1009 19:40:33.049566  212114 logs.go:282] 2 containers: [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797]
	I1009 19:40:33.049621  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.062880  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.069016  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1009 19:40:33.069102  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:40:33.148317  212114 cri.go:89] found id: "5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:33.148337  212114 cri.go:89] found id: "1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:33.148342  212114 cri.go:89] found id: ""
	I1009 19:40:33.148356  212114 logs.go:282] 2 containers: [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f]
	I1009 19:40:33.148419  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.152544  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.158280  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1009 19:40:33.158385  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:40:33.250632  212114 cri.go:89] found id: "63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:33.250656  212114 cri.go:89] found id: "84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:33.250662  212114 cri.go:89] found id: ""
	I1009 19:40:33.250669  212114 logs.go:282] 2 containers: [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4]
	I1009 19:40:33.250761  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.254707  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.258160  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:40:33.258276  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:40:33.329677  212114 cri.go:89] found id: "855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:33.329698  212114 cri.go:89] found id: "187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:33.329703  212114 cri.go:89] found id: ""
	I1009 19:40:33.329711  212114 logs.go:282] 2 containers: [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e]
	I1009 19:40:33.329770  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.335537  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.341688  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:40:33.341770  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:40:33.409335  212114 cri.go:89] found id: "4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:33.409361  212114 cri.go:89] found id: "e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:33.409365  212114 cri.go:89] found id: ""
	I1009 19:40:33.409372  212114 logs.go:282] 2 containers: [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e]
	I1009 19:40:33.409428  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.414175  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.418573  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:40:33.418657  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:40:33.480197  212114 cri.go:89] found id: "a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:33.480228  212114 cri.go:89] found id: "d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:33.480232  212114 cri.go:89] found id: ""
	I1009 19:40:33.480240  212114 logs.go:282] 2 containers: [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4]
	I1009 19:40:33.480322  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.484759  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.489087  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1009 19:40:33.489206  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:40:33.557450  212114 cri.go:89] found id: "c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:33.557473  212114 cri.go:89] found id: "c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:33.557479  212114 cri.go:89] found id: ""
	I1009 19:40:33.557685  212114 logs.go:282] 2 containers: [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c]
	I1009 19:40:33.557763  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.564818  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.574510  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 19:40:33.574618  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 19:40:33.671672  212114 cri.go:89] found id: "ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:33.671726  212114 cri.go:89] found id: ""
	I1009 19:40:33.671736  212114 logs.go:282] 1 containers: [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60]
	I1009 19:40:33.671806  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.677087  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1009 19:40:33.677159  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 19:40:33.763257  212114 cri.go:89] found id: "70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:33.763285  212114 cri.go:89] found id: "932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:33.763291  212114 cri.go:89] found id: ""
	I1009 19:40:33.763298  212114 logs.go:282] 2 containers: [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f]
	I1009 19:40:33.763356  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.767028  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.770406  212114 logs.go:123] Gathering logs for etcd [1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f] ...
	I1009 19:40:33.770425  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:33.826020  212114 logs.go:123] Gathering logs for coredns [84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4] ...
	I1009 19:40:33.826064  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:33.879106  212114 logs.go:123] Gathering logs for kube-scheduler [187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e] ...
	I1009 19:40:33.879135  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:33.930392  212114 logs.go:123] Gathering logs for containerd ...
	I1009 19:40:33.930428  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1009 19:40:34.004770  212114 logs.go:123] Gathering logs for container status ...
	I1009 19:40:34.004799  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:40:34.056516  212114 logs.go:123] Gathering logs for kubelet ...
	I1009 19:40:34.056599  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 19:40:34.117909  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141088     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-l28zw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l28zw" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118146  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141226     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118362  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141301     655 reflector.go:138] object-"kube-system"/"kindnet-token-ch425": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ch425" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118562  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141420     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118773  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141526     655 reflector.go:138] object-"kube-system"/"coredns-token-x8mx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-x8mx9" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118996  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.145947     655 reflector.go:138] object-"kube-system"/"metrics-server-token-n5d8s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-n5d8s" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.119211  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146058     655 reflector.go:138] object-"default"/"default-token-dtkng": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-dtkng" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.119436  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146124     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8fzd6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8fzd6" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.127887  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.568696     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.129333  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.908103     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.133313  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:18 old-k8s-version-135957 kubelet[655]: E1009 19:35:18.807162     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.135414  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:27 old-k8s-version-135957 kubelet[655]: E1009 19:35:27.017732     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.135765  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:28 old-k8s-version-135957 kubelet[655]: E1009 19:35:28.030088     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.135969  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:29 old-k8s-version-135957 kubelet[655]: E1009 19:35:29.798736     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.136376  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:34 old-k8s-version-135957 kubelet[655]: E1009 19:35:34.224840     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.137166  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:36 old-k8s-version-135957 kubelet[655]: E1009 19:35:36.070564     655 pod_workers.go:191] Error syncing pod dbfd3538-0cb4-4cf0-b208-e18c725f6d5d ("storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"
	W1009 19:40:34.139585  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:40 old-k8s-version-135957 kubelet[655]: E1009 19:35:40.810507     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.140516  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:48 old-k8s-version-135957 kubelet[655]: E1009 19:35:48.113466     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.140979  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.225991     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.141166  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.805911     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.141347  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:07 old-k8s-version-135957 kubelet[655]: E1009 19:36:07.794321     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.141929  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:10 old-k8s-version-135957 kubelet[655]: E1009 19:36:10.183835     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.142254  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:14 old-k8s-version-135957 kubelet[655]: E1009 19:36:14.226874     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.142436  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:20 old-k8s-version-135957 kubelet[655]: E1009 19:36:20.794329     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.142759  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:29 old-k8s-version-135957 kubelet[655]: E1009 19:36:29.793939     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.145371  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:33 old-k8s-version-135957 kubelet[655]: E1009 19:36:33.803424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.145703  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:41 old-k8s-version-135957 kubelet[655]: E1009 19:36:41.794984     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.145888  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:44 old-k8s-version-135957 kubelet[655]: E1009 19:36:44.794752     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.146482  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.299505     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.146665  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.794264     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.146989  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:04 old-k8s-version-135957 kubelet[655]: E1009 19:37:04.225263     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.147171  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:10 old-k8s-version-135957 kubelet[655]: E1009 19:37:10.794424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.147494  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:19 old-k8s-version-135957 kubelet[655]: E1009 19:37:19.794014     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.147679  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:25 old-k8s-version-135957 kubelet[655]: E1009 19:37:25.794260     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.148013  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:33 old-k8s-version-135957 kubelet[655]: E1009 19:37:33.794463     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.148203  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:36 old-k8s-version-135957 kubelet[655]: E1009 19:37:36.794403     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.148534  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:46 old-k8s-version-135957 kubelet[655]: E1009 19:37:46.794633     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.148729  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:48 old-k8s-version-135957 kubelet[655]: E1009 19:37:48.796873     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.149055  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:59 old-k8s-version-135957 kubelet[655]: E1009 19:37:59.794479     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.151483  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:03 old-k8s-version-135957 kubelet[655]: E1009 19:38:03.802412     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.152171  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:12 old-k8s-version-135957 kubelet[655]: E1009 19:38:12.799928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.152410  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:15 old-k8s-version-135957 kubelet[655]: E1009 19:38:15.794598     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.153011  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:27 old-k8s-version-135957 kubelet[655]: E1009 19:38:27.530323     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.153194  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:28 old-k8s-version-135957 kubelet[655]: E1009 19:38:28.795022     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.153519  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:34 old-k8s-version-135957 kubelet[655]: E1009 19:38:34.226016     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.153707  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:39 old-k8s-version-135957 kubelet[655]: E1009 19:38:39.794229     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154038  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:49 old-k8s-version-135957 kubelet[655]: E1009 19:38:49.795059     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.154225  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:51 old-k8s-version-135957 kubelet[655]: E1009 19:38:51.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154565  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.793928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.154754  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.795042     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154941  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:16 old-k8s-version-135957 kubelet[655]: E1009 19:39:16.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.155340  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:18 old-k8s-version-135957 kubelet[655]: E1009 19:39:18.794102     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.155552  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:27 old-k8s-version-135957 kubelet[655]: E1009 19:39:27.794317     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.155882  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: E1009 19:39:30.794130     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156066  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:42 old-k8s-version-135957 kubelet[655]: E1009 19:39:42.794279     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.156436  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: E1009 19:39:43.794072     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156775  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.806354     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156959  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.812240     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157144  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157467  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.157651  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157977  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.158159  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:34.158169  212114 logs.go:123] Gathering logs for kube-apiserver [4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797] ...
	I1009 19:40:34.158184  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:34.267905  212114 logs.go:123] Gathering logs for etcd [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a] ...
	I1009 19:40:34.267941  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:34.313520  212114 logs.go:123] Gathering logs for kube-proxy [e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e] ...
	I1009 19:40:34.313547  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:34.369410  212114 logs.go:123] Gathering logs for kube-controller-manager [d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4] ...
	I1009 19:40:34.369434  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:34.435867  212114 logs.go:123] Gathering logs for coredns [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33] ...
	I1009 19:40:34.435938  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:34.477407  212114 logs.go:123] Gathering logs for kube-scheduler [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd] ...
	I1009 19:40:34.477432  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:34.525043  212114 logs.go:123] Gathering logs for kube-proxy [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454] ...
	I1009 19:40:34.525071  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:34.579092  212114 logs.go:123] Gathering logs for kindnet [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9] ...
	I1009 19:40:34.579120  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:34.645088  212114 logs.go:123] Gathering logs for kindnet [c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c] ...
	I1009 19:40:34.645120  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:34.688041  212114 logs.go:123] Gathering logs for kubernetes-dashboard [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60] ...
	I1009 19:40:34.688076  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:34.750518  212114 logs.go:123] Gathering logs for storage-provisioner [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c] ...
	I1009 19:40:34.750548  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:34.792366  212114 logs.go:123] Gathering logs for storage-provisioner [932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f] ...
	I1009 19:40:34.792401  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:34.844432  212114 logs.go:123] Gathering logs for dmesg ...
	I1009 19:40:34.844506  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:40:34.862899  212114 logs.go:123] Gathering logs for kube-apiserver [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880] ...
	I1009 19:40:34.862973  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:34.966604  212114 logs.go:123] Gathering logs for kube-controller-manager [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184] ...
	I1009 19:40:34.966676  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:35.078882  212114 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:40:35.078925  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 19:40:35.317341  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:35.317406  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 19:40:35.317493  212114 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1009 19:40:35.317536  212114 out.go:270]   Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:35.317592  212114 out.go:270]   Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	  Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:35.317630  212114 out.go:270]   Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:35.317662  212114 out.go:270]   Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	  Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:35.317702  212114 out.go:270]   Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:35.317738  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:35.317758  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:45.319566  212114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:40:45.334167  212114 api_server.go:72] duration metric: took 5m58.486701724s to wait for apiserver process to appear ...
	I1009 19:40:45.334193  212114 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:40:45.334230  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:40:45.334291  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:40:45.404349  212114 cri.go:89] found id: "a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:45.404374  212114 cri.go:89] found id: "4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:45.404379  212114 cri.go:89] found id: ""
	I1009 19:40:45.404394  212114 logs.go:282] 2 containers: [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797]
	I1009 19:40:45.404454  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.408983  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.413035  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1009 19:40:45.413099  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:40:45.466616  212114 cri.go:89] found id: "5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:45.466636  212114 cri.go:89] found id: "1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:45.466641  212114 cri.go:89] found id: ""
	I1009 19:40:45.466651  212114 logs.go:282] 2 containers: [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f]
	I1009 19:40:45.466707  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.470602  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.474342  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1009 19:40:45.474415  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:40:45.536508  212114 cri.go:89] found id: "63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:45.536533  212114 cri.go:89] found id: "84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:45.536539  212114 cri.go:89] found id: ""
	I1009 19:40:45.536547  212114 logs.go:282] 2 containers: [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4]
	I1009 19:40:45.536606  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.542255  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.546144  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:40:45.546224  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:40:45.610102  212114 cri.go:89] found id: "855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:45.610126  212114 cri.go:89] found id: "187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:45.610131  212114 cri.go:89] found id: ""
	I1009 19:40:45.610138  212114 logs.go:282] 2 containers: [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e]
	I1009 19:40:45.610196  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.614464  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.618470  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:40:45.618545  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:40:45.669697  212114 cri.go:89] found id: "4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:45.669720  212114 cri.go:89] found id: "e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:45.669726  212114 cri.go:89] found id: ""
	I1009 19:40:45.669733  212114 logs.go:282] 2 containers: [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e]
	I1009 19:40:45.669794  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.676366  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.680603  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:40:45.680692  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:40:45.749148  212114 cri.go:89] found id: "a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:45.749173  212114 cri.go:89] found id: "d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:45.749178  212114 cri.go:89] found id: ""
	I1009 19:40:45.749185  212114 logs.go:282] 2 containers: [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4]
	I1009 19:40:45.749242  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.753756  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.757836  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1009 19:40:45.757914  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:40:45.935969  212114 cri.go:89] found id: "c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:45.935996  212114 cri.go:89] found id: "c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:45.936001  212114 cri.go:89] found id: ""
	I1009 19:40:45.936008  212114 logs.go:282] 2 containers: [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c]
	I1009 19:40:45.936065  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.940437  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.953098  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 19:40:45.953178  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 19:40:46.012160  212114 cri.go:89] found id: "ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:46.012185  212114 cri.go:89] found id: ""
	I1009 19:40:46.012193  212114 logs.go:282] 1 containers: [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60]
	I1009 19:40:46.012253  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.016624  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1009 19:40:46.016713  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 19:40:46.083357  212114 cri.go:89] found id: "70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:46.083383  212114 cri.go:89] found id: "932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:46.083388  212114 cri.go:89] found id: ""
	I1009 19:40:46.083396  212114 logs.go:282] 2 containers: [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f]
	I1009 19:40:46.083456  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.087674  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.091613  212114 logs.go:123] Gathering logs for storage-provisioner [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c] ...
	I1009 19:40:46.091642  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:46.144851  212114 logs.go:123] Gathering logs for etcd [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a] ...
	I1009 19:40:46.144890  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:46.214756  212114 logs.go:123] Gathering logs for coredns [84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4] ...
	I1009 19:40:46.214788  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:46.268020  212114 logs.go:123] Gathering logs for kube-scheduler [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd] ...
	I1009 19:40:46.268058  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:46.318517  212114 logs.go:123] Gathering logs for kube-scheduler [187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e] ...
	I1009 19:40:46.318547  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:46.374095  212114 logs.go:123] Gathering logs for kube-controller-manager [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184] ...
	I1009 19:40:46.374136  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:46.460506  212114 logs.go:123] Gathering logs for kindnet [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9] ...
	I1009 19:40:46.460541  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:46.524819  212114 logs.go:123] Gathering logs for kindnet [c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c] ...
	I1009 19:40:46.524849  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:46.583620  212114 logs.go:123] Gathering logs for kubelet ...
	I1009 19:40:46.583702  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 19:40:46.644048  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141088     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-l28zw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l28zw" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644351  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141226     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644618  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141301     655 reflector.go:138] object-"kube-system"/"kindnet-token-ch425": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ch425" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644879  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141420     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645133  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141526     655 reflector.go:138] object-"kube-system"/"coredns-token-x8mx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-x8mx9" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645400  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.145947     655 reflector.go:138] object-"kube-system"/"metrics-server-token-n5d8s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-n5d8s" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645645  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146058     655 reflector.go:138] object-"default"/"default-token-dtkng": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-dtkng" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645924  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146124     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8fzd6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8fzd6" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.654238  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.568696     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.655773  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.908103     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.658740  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:18 old-k8s-version-135957 kubelet[655]: E1009 19:35:18.807162     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.660983  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:27 old-k8s-version-135957 kubelet[655]: E1009 19:35:27.017732     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.661371  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:28 old-k8s-version-135957 kubelet[655]: E1009 19:35:28.030088     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.661645  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:29 old-k8s-version-135957 kubelet[655]: E1009 19:35:29.798736     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.662027  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:34 old-k8s-version-135957 kubelet[655]: E1009 19:35:34.224840     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.662893  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:36 old-k8s-version-135957 kubelet[655]: E1009 19:35:36.070564     655 pod_workers.go:191] Error syncing pod dbfd3538-0cb4-4cf0-b208-e18c725f6d5d ("storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"
	W1009 19:40:46.665881  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:40 old-k8s-version-135957 kubelet[655]: E1009 19:35:40.810507     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.666940  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:48 old-k8s-version-135957 kubelet[655]: E1009 19:35:48.113466     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.667468  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.225991     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.667683  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.805911     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.667944  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:07 old-k8s-version-135957 kubelet[655]: E1009 19:36:07.794321     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.668632  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:10 old-k8s-version-135957 kubelet[655]: E1009 19:36:10.183835     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.669178  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:14 old-k8s-version-135957 kubelet[655]: E1009 19:36:14.226874     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.669376  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:20 old-k8s-version-135957 kubelet[655]: E1009 19:36:20.794329     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.669743  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:29 old-k8s-version-135957 kubelet[655]: E1009 19:36:29.793939     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.672283  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:33 old-k8s-version-135957 kubelet[655]: E1009 19:36:33.803424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.672629  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:41 old-k8s-version-135957 kubelet[655]: E1009 19:36:41.794984     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.672973  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:44 old-k8s-version-135957 kubelet[655]: E1009 19:36:44.794752     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.673626  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.299505     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.673854  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.794264     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.674216  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:04 old-k8s-version-135957 kubelet[655]: E1009 19:37:04.225263     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.674431  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:10 old-k8s-version-135957 kubelet[655]: E1009 19:37:10.794424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.674804  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:19 old-k8s-version-135957 kubelet[655]: E1009 19:37:19.794014     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.675040  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:25 old-k8s-version-135957 kubelet[655]: E1009 19:37:25.794260     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.675400  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:33 old-k8s-version-135957 kubelet[655]: E1009 19:37:33.794463     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.675661  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:36 old-k8s-version-135957 kubelet[655]: E1009 19:37:36.794403     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.676031  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:46 old-k8s-version-135957 kubelet[655]: E1009 19:37:46.794633     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.676261  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:48 old-k8s-version-135957 kubelet[655]: E1009 19:37:48.796873     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.676632  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:59 old-k8s-version-135957 kubelet[655]: E1009 19:37:59.794479     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.679482  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:03 old-k8s-version-135957 kubelet[655]: E1009 19:38:03.802412     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.679873  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:12 old-k8s-version-135957 kubelet[655]: E1009 19:38:12.799928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.680105  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:15 old-k8s-version-135957 kubelet[655]: E1009 19:38:15.794598     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.680775  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:27 old-k8s-version-135957 kubelet[655]: E1009 19:38:27.530323     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.680993  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:28 old-k8s-version-135957 kubelet[655]: E1009 19:38:28.795022     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.681361  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:34 old-k8s-version-135957 kubelet[655]: E1009 19:38:34.226016     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.681590  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:39 old-k8s-version-135957 kubelet[655]: E1009 19:38:39.794229     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.681999  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:49 old-k8s-version-135957 kubelet[655]: E1009 19:38:49.795059     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.682230  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:51 old-k8s-version-135957 kubelet[655]: E1009 19:38:51.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.682621  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.793928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.682835  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.795042     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.683069  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:16 old-k8s-version-135957 kubelet[655]: E1009 19:39:16.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.683468  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:18 old-k8s-version-135957 kubelet[655]: E1009 19:39:18.794102     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.683703  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:27 old-k8s-version-135957 kubelet[655]: E1009 19:39:27.794317     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.684106  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: E1009 19:39:30.794130     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.684324  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:42 old-k8s-version-135957 kubelet[655]: E1009 19:39:42.794279     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.684724  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: E1009 19:39:43.794072     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.685423  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.806354     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.686352  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.812240     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.687361  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.687730  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.687961  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.688346  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.688589  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.688999  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.692433  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:46.692450  212114 logs.go:123] Gathering logs for kube-apiserver [4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797] ...
	I1009 19:40:46.692464  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:46.775228  212114 logs.go:123] Gathering logs for etcd [1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f] ...
	I1009 19:40:46.775261  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:46.843804  212114 logs.go:123] Gathering logs for kube-controller-manager [d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4] ...
	I1009 19:40:46.843886  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:46.910698  212114 logs.go:123] Gathering logs for containerd ...
	I1009 19:40:46.910734  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1009 19:40:46.996012  212114 logs.go:123] Gathering logs for dmesg ...
	I1009 19:40:46.996052  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:40:47.015084  212114 logs.go:123] Gathering logs for kube-apiserver [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880] ...
	I1009 19:40:47.015112  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:47.107848  212114 logs.go:123] Gathering logs for kubernetes-dashboard [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60] ...
	I1009 19:40:47.107928  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:47.154334  212114 logs.go:123] Gathering logs for storage-provisioner [932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f] ...
	I1009 19:40:47.154412  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:47.203396  212114 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:40:47.203474  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 19:40:47.462255  212114 logs.go:123] Gathering logs for coredns [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33] ...
	I1009 19:40:47.462327  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:47.515099  212114 logs.go:123] Gathering logs for kube-proxy [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454] ...
	I1009 19:40:47.515182  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:47.569213  212114 logs.go:123] Gathering logs for kube-proxy [e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e] ...
	I1009 19:40:47.569289  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:47.620464  212114 logs.go:123] Gathering logs for container status ...
	I1009 19:40:47.620545  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:40:47.675639  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:47.675797  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 19:40:47.675885  212114 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1009 19:40:47.675951  212114 out.go:270]   Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:47.675995  212114 out.go:270]   Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	  Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:47.676047  212114 out.go:270]   Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:47.676086  212114 out.go:270]   Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	  Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:47.676137  212114 out.go:270]   Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:47.676196  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:47.676228  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:57.677635  212114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:40:57.689852  212114 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:40:57.692719  212114 out.go:201] 
	W1009 19:40:57.694057  212114 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1009 19:40:57.694098  212114 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1009 19:40:57.694120  212114 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1009 19:40:57.694126  212114 out.go:270] * 
	* 
	W1009 19:40:57.695133  212114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:40:57.697281  212114 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
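For reference, the exact start invocation above (which returned exit status 102) can be replayed by hand together with the cleanup the captured log itself suggests ("minikube delete --all --purge"); the following is only a local-triage sketch built from those two pieces of the report, not part of the recorded run:

	# wipe all minikube profiles and cached state, per the suggestion captured in the log above
	out/minikube-linux-arm64 delete --all --purge
	# retry the same start command that failed with exit status 102
	out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
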
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-135957
helpers_test.go:235: (dbg) docker inspect old-k8s-version-135957:

-- stdout --
	[
	    {
	        "Id": "9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4",
	        "Created": "2024-10-09T19:31:45.624893661Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 212309,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T19:34:39.330687036Z",
	            "FinishedAt": "2024-10-09T19:34:38.058767221Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4/hosts",
	        "LogPath": "/var/lib/docker/containers/9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4/9dcb067299db8a96cc9f7de24ff02d455c700da52eeecfd279d8e0b31b9f7cb4-json.log",
	        "Name": "/old-k8s-version-135957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-135957:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-135957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2ce229899914a4c92a1d027b36098efdde96e93e1e39e3832306b4892ad80d83-init/diff:/var/lib/docker/overlay2/b874d444a15868350f8fd5f52e8f0ed756efd8ce6e723f3b60197aecd7f71b6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2ce229899914a4c92a1d027b36098efdde96e93e1e39e3832306b4892ad80d83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2ce229899914a4c92a1d027b36098efdde96e93e1e39e3832306b4892ad80d83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2ce229899914a4c92a1d027b36098efdde96e93e1e39e3832306b4892ad80d83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-135957",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-135957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-135957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-135957",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-135957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "709597d7178708ad81e768ce3de65f41ef2090eb5f67240a5305d04d3e6ebd22",
	            "SandboxKey": "/var/run/docker/netns/709597d71787",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-135957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e0ecbcb012de88031158533d719b809fba94a66747c8e5e14b6a79021f77b92a",
	                    "EndpointID": "5d11a65746d3263a1d48d80c054b74eb86ef0e29c0b6682413e444a50a1fffda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-135957",
	                        "9dcb067299db"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
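When reproducing this post-mortem locally, the same container can be queried with Docker's Go-template --format flag rather than reading the full dump; the container name and fields below come from the inspect output above, and the specific queries are only illustrative:

	# node container state and restart count
	docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-135957
	# published host ports for 22, 2376, 5000, 8443 and 32443
	docker inspect --format '{{json .NetworkSettings.Ports}}' old-k8s-version-135957
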
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135957 -n old-k8s-version-135957
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-135957 logs -n 25: (2.364405202s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-383078                              | cert-expiration-383078       | jenkins | v1.34.0 | 09 Oct 24 19:30 UTC | 09 Oct 24 19:31 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-872019                               | force-systemd-env-872019     | jenkins | v1.34.0 | 09 Oct 24 19:30 UTC | 09 Oct 24 19:30 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-872019                            | force-systemd-env-872019     | jenkins | v1.34.0 | 09 Oct 24 19:30 UTC | 09 Oct 24 19:31 UTC |
	| start   | -p cert-options-480357                                 | cert-options-480357          | jenkins | v1.34.0 | 09 Oct 24 19:31 UTC | 09 Oct 24 19:31 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-480357 ssh                                | cert-options-480357          | jenkins | v1.34.0 | 09 Oct 24 19:31 UTC | 09 Oct 24 19:31 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-480357 -- sudo                         | cert-options-480357          | jenkins | v1.34.0 | 09 Oct 24 19:31 UTC | 09 Oct 24 19:31 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-480357                                 | cert-options-480357          | jenkins | v1.34.0 | 09 Oct 24 19:31 UTC | 09 Oct 24 19:31 UTC |
	| start   | -p old-k8s-version-135957                              | old-k8s-version-135957       | jenkins | v1.34.0 | 09 Oct 24 19:31 UTC | 09 Oct 24 19:34 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-383078                              | cert-expiration-383078       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:34 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-383078                              | cert-expiration-383078       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:34 UTC |
	| start   | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:35 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-135957        | old-k8s-version-135957       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-135957                              | old-k8s-version-135957       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:34 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-135957             | old-k8s-version-135957       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC | 09 Oct 24 19:34 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-135957                              | old-k8s-version-135957       | jenkins | v1.34.0 | 09 Oct 24 19:34 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083200  | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:35 UTC | 09 Oct 24 19:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:35 UTC | 09 Oct 24 19:35 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083200       | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:35 UTC | 09 Oct 24 19:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:35 UTC | 09 Oct 24 19:40 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-083200                           | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC | 09 Oct 24 19:40 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC | 09 Oct 24 19:40 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC | 09 Oct 24 19:40 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC | 09 Oct 24 19:40 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-083200 | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC | 09 Oct 24 19:40 UTC |
	|         | default-k8s-diff-port-083200                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-269650                                  | embed-certs-269650           | jenkins | v1.34.0 | 09 Oct 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:40:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:40:26.489645  222294 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:40:26.489852  222294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:26.489880  222294 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:26.489903  222294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:26.490255  222294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:40:26.490768  222294 out.go:352] Setting JSON to false
	I1009 19:40:26.492042  222294 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4970,"bootTime":1728497857,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 19:40:26.492137  222294 start.go:139] virtualization:  
	I1009 19:40:26.494342  222294 out.go:177] * [embed-certs-269650] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 19:40:26.496663  222294 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:40:26.496791  222294 notify.go:220] Checking for updates...
	I1009 19:40:26.499942  222294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:40:26.501946  222294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 19:40:26.504521  222294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 19:40:26.506264  222294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:40:26.507754  222294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:40:26.510295  222294 config.go:182] Loaded profile config "old-k8s-version-135957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1009 19:40:26.510392  222294 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:40:26.530205  222294 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:40:26.530337  222294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:26.591066  222294 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:40:26.579122767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:40:26.591180  222294 docker.go:318] overlay module found
	I1009 19:40:26.592869  222294 out.go:177] * Using the docker driver based on user configuration
	I1009 19:40:26.595035  222294 start.go:297] selected driver: docker
	I1009 19:40:26.595053  222294 start.go:901] validating driver "docker" against <nil>
	I1009 19:40:26.595067  222294 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:40:26.595699  222294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:40:26.646049  222294 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:40:26.6364872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:40:26.646265  222294 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 19:40:26.646490  222294 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:40:26.648166  222294 out.go:177] * Using Docker driver with root privileges
	I1009 19:40:26.650208  222294 cni.go:84] Creating CNI manager for ""
	I1009 19:40:26.650286  222294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 19:40:26.650303  222294 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:40:26.650385  222294 start.go:340] cluster config:
	{Name:embed-certs-269650 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-269650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:26.652181  222294 out.go:177] * Starting "embed-certs-269650" primary control-plane node in "embed-certs-269650" cluster
	I1009 19:40:26.653916  222294 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1009 19:40:26.656391  222294 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 19:40:26.658267  222294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 19:40:26.658311  222294 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 19:40:26.658319  222294 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1009 19:40:26.658329  222294 cache.go:56] Caching tarball of preloaded images
	I1009 19:40:26.658403  222294 preload.go:172] Found /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1009 19:40:26.658412  222294 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1009 19:40:26.658520  222294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/config.json ...
	I1009 19:40:26.658537  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/config.json: {Name:mk225ea6130476915b385ad0571afe8e67d7e653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:26.677955  222294 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1009 19:40:26.677977  222294 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1009 19:40:26.677991  222294 cache.go:194] Successfully downloaded all kic artifacts
	I1009 19:40:26.678013  222294 start.go:360] acquireMachinesLock for embed-certs-269650: {Name:mk3d1962565abb0952d504a83857c81bcdfb71a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:40:26.678117  222294 start.go:364] duration metric: took 83.88µs to acquireMachinesLock for "embed-certs-269650"
	I1009 19:40:26.678148  222294 start.go:93] Provisioning new machine with config: &{Name:embed-certs-269650 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-269650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1009 19:40:26.678225  222294 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:40:23.983697  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:26.483244  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:28.484264  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:26.680907  222294 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1009 19:40:26.681170  222294 start.go:159] libmachine.API.Create for "embed-certs-269650" (driver="docker")
	I1009 19:40:26.681206  222294 client.go:168] LocalClient.Create starting
	I1009 19:40:26.681283  222294 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem
	I1009 19:40:26.681321  222294 main.go:141] libmachine: Decoding PEM data...
	I1009 19:40:26.681339  222294 main.go:141] libmachine: Parsing certificate...
	I1009 19:40:26.681390  222294 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem
	I1009 19:40:26.681411  222294 main.go:141] libmachine: Decoding PEM data...
	I1009 19:40:26.681427  222294 main.go:141] libmachine: Parsing certificate...
	I1009 19:40:26.681803  222294 cli_runner.go:164] Run: docker network inspect embed-certs-269650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:40:26.700520  222294 cli_runner.go:211] docker network inspect embed-certs-269650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:40:26.700599  222294 network_create.go:284] running [docker network inspect embed-certs-269650] to gather additional debugging logs...
	I1009 19:40:26.700620  222294 cli_runner.go:164] Run: docker network inspect embed-certs-269650
	W1009 19:40:26.715591  222294 cli_runner.go:211] docker network inspect embed-certs-269650 returned with exit code 1
	I1009 19:40:26.715622  222294 network_create.go:287] error running [docker network inspect embed-certs-269650]: docker network inspect embed-certs-269650: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-269650 not found
	I1009 19:40:26.715634  222294 network_create.go:289] output of [docker network inspect embed-certs-269650]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-269650 not found
	
	** /stderr **
	I1009 19:40:26.715722  222294 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:26.731726  222294 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bda550f8dcd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:31:98:1f:4a} reservation:<nil>}
	I1009 19:40:26.732143  222294 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4f797a1a661b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:fc:32:f4:94} reservation:<nil>}
	I1009 19:40:26.732552  222294 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3956c1bd1495 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:32:83:c1:05} reservation:<nil>}
	I1009 19:40:26.732959  222294 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e0ecbcb012de IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:b9:ec:0f:ad} reservation:<nil>}
	I1009 19:40:26.733476  222294 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2e0d0}
	I1009 19:40:26.733516  222294 network_create.go:124] attempt to create docker network embed-certs-269650 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1009 19:40:26.733580  222294 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-269650 embed-certs-269650
	I1009 19:40:26.809451  222294 network_create.go:108] docker network embed-certs-269650 192.168.85.0/24 created
	I1009 19:40:26.809484  222294 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-269650" container
	I1009 19:40:26.809556  222294 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:40:26.824742  222294 cli_runner.go:164] Run: docker volume create embed-certs-269650 --label name.minikube.sigs.k8s.io=embed-certs-269650 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:40:26.842771  222294 oci.go:103] Successfully created a docker volume embed-certs-269650
	I1009 19:40:26.842869  222294 cli_runner.go:164] Run: docker run --rm --name embed-certs-269650-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-269650 --entrypoint /usr/bin/test -v embed-certs-269650:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 19:40:27.479996  222294 oci.go:107] Successfully prepared a docker volume embed-certs-269650
	I1009 19:40:27.480049  222294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 19:40:27.480081  222294 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:40:27.480152  222294 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-269650:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:40:30.985198  212114 pod_ready.go:103] pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace has status "Ready":"False"
	I1009 19:40:32.983535  212114 pod_ready.go:82] duration metric: took 4m0.007360452s for pod "metrics-server-9975d5f86-jcsl5" in "kube-system" namespace to be "Ready" ...
	E1009 19:40:32.983561  212114 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 19:40:32.983570  212114 pod_ready.go:39] duration metric: took 5m29.838077957s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:40:32.983586  212114 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:40:32.983616  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:40:32.983677  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:40:33.049530  212114 cri.go:89] found id: "a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:33.049553  212114 cri.go:89] found id: "4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:33.049558  212114 cri.go:89] found id: ""
	I1009 19:40:33.049566  212114 logs.go:282] 2 containers: [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797]
	I1009 19:40:33.049621  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.062880  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.069016  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1009 19:40:33.069102  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:40:33.148317  212114 cri.go:89] found id: "5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:33.148337  212114 cri.go:89] found id: "1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:33.148342  212114 cri.go:89] found id: ""
	I1009 19:40:33.148356  212114 logs.go:282] 2 containers: [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f]
	I1009 19:40:33.148419  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.152544  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.158280  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1009 19:40:33.158385  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:40:33.250632  212114 cri.go:89] found id: "63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:33.250656  212114 cri.go:89] found id: "84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:33.250662  212114 cri.go:89] found id: ""
	I1009 19:40:33.250669  212114 logs.go:282] 2 containers: [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4]
	I1009 19:40:33.250761  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.254707  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.258160  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:40:33.258276  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:40:33.329677  212114 cri.go:89] found id: "855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:33.329698  212114 cri.go:89] found id: "187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:33.329703  212114 cri.go:89] found id: ""
	I1009 19:40:33.329711  212114 logs.go:282] 2 containers: [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e]
	I1009 19:40:33.329770  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.335537  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.341688  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:40:33.341770  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:40:33.409335  212114 cri.go:89] found id: "4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:33.409361  212114 cri.go:89] found id: "e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:33.409365  212114 cri.go:89] found id: ""
	I1009 19:40:33.409372  212114 logs.go:282] 2 containers: [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e]
	I1009 19:40:33.409428  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.414175  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.418573  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:40:33.418657  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:40:33.480197  212114 cri.go:89] found id: "a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:33.480228  212114 cri.go:89] found id: "d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:33.480232  212114 cri.go:89] found id: ""
	I1009 19:40:33.480240  212114 logs.go:282] 2 containers: [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4]
	I1009 19:40:33.480322  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.484759  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.489087  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1009 19:40:33.489206  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:40:33.557450  212114 cri.go:89] found id: "c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:33.557473  212114 cri.go:89] found id: "c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:33.557479  212114 cri.go:89] found id: ""
	I1009 19:40:33.557685  212114 logs.go:282] 2 containers: [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c]
	I1009 19:40:33.557763  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.564818  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.574510  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 19:40:33.574618  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 19:40:33.671672  212114 cri.go:89] found id: "ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:33.671726  212114 cri.go:89] found id: ""
	I1009 19:40:33.671736  212114 logs.go:282] 1 containers: [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60]
	I1009 19:40:33.671806  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.677087  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1009 19:40:33.677159  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 19:40:33.763257  212114 cri.go:89] found id: "70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:33.763285  212114 cri.go:89] found id: "932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:33.763291  212114 cri.go:89] found id: ""
	I1009 19:40:33.763298  212114 logs.go:282] 2 containers: [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f]
	I1009 19:40:33.763356  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.767028  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:33.770406  212114 logs.go:123] Gathering logs for etcd [1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f] ...
	I1009 19:40:33.770425  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:31.834322  222294 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-269650:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.354107961s)
	I1009 19:40:31.834351  222294 kic.go:203] duration metric: took 4.354267842s to extract preloaded images to volume ...
	W1009 19:40:31.834488  222294 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 19:40:31.834623  222294 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:40:31.886353  222294 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-269650 --name embed-certs-269650 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-269650 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-269650 --network embed-certs-269650 --ip 192.168.85.2 --volume embed-certs-269650:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 19:40:32.220890  222294 cli_runner.go:164] Run: docker container inspect embed-certs-269650 --format={{.State.Running}}
	I1009 19:40:32.241979  222294 cli_runner.go:164] Run: docker container inspect embed-certs-269650 --format={{.State.Status}}
	I1009 19:40:32.264831  222294 cli_runner.go:164] Run: docker exec embed-certs-269650 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:40:32.344747  222294 oci.go:144] the created container "embed-certs-269650" has a running status.
	I1009 19:40:32.344774  222294 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa...
	I1009 19:40:32.684175  222294 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:40:32.710696  222294 cli_runner.go:164] Run: docker container inspect embed-certs-269650 --format={{.State.Status}}
	I1009 19:40:32.739326  222294 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:40:32.739345  222294 kic_runner.go:114] Args: [docker exec --privileged embed-certs-269650 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:40:32.825699  222294 cli_runner.go:164] Run: docker container inspect embed-certs-269650 --format={{.State.Status}}
	I1009 19:40:32.855636  222294 machine.go:93] provisionDockerMachine start ...
	I1009 19:40:32.855735  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:32.887428  222294 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:32.887689  222294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1009 19:40:32.887699  222294 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:40:33.073172  222294 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269650
	
	I1009 19:40:33.073199  222294 ubuntu.go:169] provisioning hostname "embed-certs-269650"
	I1009 19:40:33.073273  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:33.097412  222294 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:33.097725  222294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1009 19:40:33.097747  222294 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-269650 && echo "embed-certs-269650" | sudo tee /etc/hostname
	I1009 19:40:33.312776  222294 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-269650
	
	I1009 19:40:33.312867  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:33.339405  222294 main.go:141] libmachine: Using SSH client type: native
	I1009 19:40:33.339726  222294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1009 19:40:33.339751  222294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-269650' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-269650/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-269650' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:40:33.517148  222294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:40:33.517176  222294 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-2290/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-2290/.minikube}
	I1009 19:40:33.517197  222294 ubuntu.go:177] setting up certificates
	I1009 19:40:33.517205  222294 provision.go:84] configureAuth start
	I1009 19:40:33.517266  222294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-269650
	I1009 19:40:33.547901  222294 provision.go:143] copyHostCerts
	I1009 19:40:33.547970  222294 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem, removing ...
	I1009 19:40:33.547983  222294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem
	I1009 19:40:33.548060  222294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/ca.pem (1078 bytes)
	I1009 19:40:33.548158  222294 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem, removing ...
	I1009 19:40:33.548170  222294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem
	I1009 19:40:33.548198  222294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/cert.pem (1123 bytes)
	I1009 19:40:33.548269  222294 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem, removing ...
	I1009 19:40:33.548286  222294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem
	I1009 19:40:33.548313  222294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-2290/.minikube/key.pem (1679 bytes)
	I1009 19:40:33.548377  222294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem org=jenkins.embed-certs-269650 san=[127.0.0.1 192.168.85.2 embed-certs-269650 localhost minikube]
	I1009 19:40:33.951433  222294 provision.go:177] copyRemoteCerts
	I1009 19:40:33.951571  222294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:40:33.951635  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:33.991075  222294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa Username:docker}
	I1009 19:40:34.102412  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 19:40:34.130620  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 19:40:34.162804  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:40:34.187772  222294 provision.go:87] duration metric: took 670.545374ms to configureAuth
	I1009 19:40:34.187805  222294 ubuntu.go:193] setting minikube options for container-runtime
	I1009 19:40:34.188002  222294 config.go:182] Loaded profile config "embed-certs-269650": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:40:34.188016  222294 machine.go:96] duration metric: took 1.332359878s to provisionDockerMachine
	I1009 19:40:34.188023  222294 client.go:171] duration metric: took 7.506806066s to LocalClient.Create
	I1009 19:40:34.188037  222294 start.go:167] duration metric: took 7.506867044s to libmachine.API.Create "embed-certs-269650"
	I1009 19:40:34.188048  222294 start.go:293] postStartSetup for "embed-certs-269650" (driver="docker")
	I1009 19:40:34.188058  222294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:40:34.188121  222294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:40:34.188164  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:34.216674  222294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa Username:docker}
	I1009 19:40:34.322105  222294 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:40:34.325752  222294 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:40:34.325794  222294 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 19:40:34.325806  222294 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 19:40:34.325813  222294 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 19:40:34.325837  222294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/addons for local assets ...
	I1009 19:40:34.325893  222294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-2290/.minikube/files for local assets ...
	I1009 19:40:34.325977  222294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem -> 75962.pem in /etc/ssl/certs
	I1009 19:40:34.326089  222294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:40:34.336247  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem --> /etc/ssl/certs/75962.pem (1708 bytes)
	I1009 19:40:34.362874  222294 start.go:296] duration metric: took 174.812137ms for postStartSetup
	I1009 19:40:34.363231  222294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-269650
	I1009 19:40:34.383351  222294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/config.json ...
	I1009 19:40:34.383640  222294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:40:34.383694  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:34.405686  222294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa Username:docker}
	I1009 19:40:34.498491  222294 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:40:34.504800  222294 start.go:128] duration metric: took 7.82655928s to createHost
	I1009 19:40:34.504826  222294 start.go:83] releasing machines lock for "embed-certs-269650", held for 7.826694777s
	I1009 19:40:34.504907  222294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-269650
	I1009 19:40:34.528454  222294 ssh_runner.go:195] Run: cat /version.json
	I1009 19:40:34.528510  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:34.528755  222294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:40:34.528829  222294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-269650
	I1009 19:40:34.562978  222294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa Username:docker}
	I1009 19:40:34.570239  222294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/embed-certs-269650/id_rsa Username:docker}
	I1009 19:40:34.822009  222294 ssh_runner.go:195] Run: systemctl --version
	I1009 19:40:34.827198  222294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:40:34.831717  222294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1009 19:40:34.860499  222294 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
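	For reference, the find/sed patch above only touches existing loopback CNI files: it injects a "name" field when one is missing and pins the CNI version to 1.0.0. A patched loopback config under /etc/cni/net.d would look roughly like the following (an illustrative reconstruction from the command, not a file captured from this run):
	  {
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	  }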
	I1009 19:40:34.860593  222294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:40:34.896792  222294 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 19:40:34.896833  222294 start.go:495] detecting cgroup driver to use...
	I1009 19:40:34.896866  222294 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 19:40:34.896937  222294 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 19:40:34.912474  222294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 19:40:34.926652  222294 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:40:34.926718  222294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:40:34.952875  222294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:40:34.971606  222294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:40:35.106013  222294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:40:35.229188  222294 docker.go:233] disabling docker service ...
	I1009 19:40:35.229267  222294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:40:35.264716  222294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:40:35.282980  222294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:40:35.382544  222294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:40:35.470655  222294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:40:35.483274  222294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:40:35.504686  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1009 19:40:35.516122  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 19:40:35.526917  222294 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 19:40:35.527031  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 19:40:35.537952  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 19:40:35.549796  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 19:40:35.559488  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 19:40:35.569451  222294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:40:35.578931  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 19:40:35.589121  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1009 19:40:35.599155  222294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1009 19:40:35.609509  222294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:40:35.618577  222294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:40:35.627383  222294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:35.707742  222294 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 19:40:35.869739  222294 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1009 19:40:35.869820  222294 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1009 19:40:35.874069  222294 start.go:563] Will wait 60s for crictl version
	I1009 19:40:35.874136  222294 ssh_runner.go:195] Run: which crictl
	I1009 19:40:35.877784  222294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:40:35.925914  222294 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1009 19:40:35.925984  222294 ssh_runner.go:195] Run: containerd --version
	I1009 19:40:35.952223  222294 ssh_runner.go:195] Run: containerd --version
	I1009 19:40:35.976093  222294 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1009 19:40:35.977128  222294 cli_runner.go:164] Run: docker network inspect embed-certs-269650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:40:35.993221  222294 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1009 19:40:35.996973  222294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:36.007949  222294 kubeadm.go:883] updating cluster {Name:embed-certs-269650 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-269650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:40:36.008066  222294 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 19:40:36.008126  222294 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:36.052360  222294 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 19:40:36.052386  222294 containerd.go:534] Images already preloaded, skipping extraction
	I1009 19:40:36.052454  222294 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:40:36.092037  222294 containerd.go:627] all images are preloaded for containerd runtime.
	I1009 19:40:36.092061  222294 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:40:36.092070  222294 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I1009 19:40:36.092166  222294 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-269650 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-269650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:40:36.092240  222294 ssh_runner.go:195] Run: sudo crictl info
	I1009 19:40:36.134983  222294 cni.go:84] Creating CNI manager for ""
	I1009 19:40:36.135010  222294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 19:40:36.135020  222294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:40:36.135045  222294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-269650 NodeName:embed-certs-269650 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:40:36.135190  222294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-269650"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:40:36.135271  222294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:40:36.146145  222294 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:40:36.146285  222294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:40:36.155234  222294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1009 19:40:36.174369  222294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:40:36.194645  222294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 19:40:36.217413  222294 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:40:36.220896  222294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:40:36.232146  222294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:40:36.323823  222294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:40:36.340475  222294 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650 for IP: 192.168.85.2
	I1009 19:40:36.340497  222294 certs.go:194] generating shared ca certs ...
	I1009 19:40:36.340513  222294 certs.go:226] acquiring lock for ca certs: {Name:mke6990d9a3fb276a87991bc9cbf7d64b4192c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:36.340732  222294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key
	I1009 19:40:36.340805  222294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key
	I1009 19:40:36.340820  222294 certs.go:256] generating profile certs ...
	I1009 19:40:36.340901  222294 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.key
	I1009 19:40:36.340928  222294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.crt with IP's: []
	I1009 19:40:33.826020  212114 logs.go:123] Gathering logs for coredns [84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4] ...
	I1009 19:40:33.826064  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:33.879106  212114 logs.go:123] Gathering logs for kube-scheduler [187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e] ...
	I1009 19:40:33.879135  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:33.930392  212114 logs.go:123] Gathering logs for containerd ...
	I1009 19:40:33.930428  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1009 19:40:34.004770  212114 logs.go:123] Gathering logs for container status ...
	I1009 19:40:34.004799  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:40:34.056516  212114 logs.go:123] Gathering logs for kubelet ...
	I1009 19:40:34.056599  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 19:40:34.117909  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141088     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-l28zw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l28zw" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118146  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141226     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118362  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141301     655 reflector.go:138] object-"kube-system"/"kindnet-token-ch425": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ch425" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118562  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141420     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118773  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141526     655 reflector.go:138] object-"kube-system"/"coredns-token-x8mx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-x8mx9" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.118996  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.145947     655 reflector.go:138] object-"kube-system"/"metrics-server-token-n5d8s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-n5d8s" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.119211  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146058     655 reflector.go:138] object-"default"/"default-token-dtkng": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-dtkng" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.119436  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146124     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8fzd6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8fzd6" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:34.127887  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.568696     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.129333  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.908103     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.133313  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:18 old-k8s-version-135957 kubelet[655]: E1009 19:35:18.807162     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.135414  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:27 old-k8s-version-135957 kubelet[655]: E1009 19:35:27.017732     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.135765  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:28 old-k8s-version-135957 kubelet[655]: E1009 19:35:28.030088     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.135969  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:29 old-k8s-version-135957 kubelet[655]: E1009 19:35:29.798736     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.136376  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:34 old-k8s-version-135957 kubelet[655]: E1009 19:35:34.224840     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.137166  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:36 old-k8s-version-135957 kubelet[655]: E1009 19:35:36.070564     655 pod_workers.go:191] Error syncing pod dbfd3538-0cb4-4cf0-b208-e18c725f6d5d ("storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"
	W1009 19:40:34.139585  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:40 old-k8s-version-135957 kubelet[655]: E1009 19:35:40.810507     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.140516  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:48 old-k8s-version-135957 kubelet[655]: E1009 19:35:48.113466     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.140979  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.225991     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.141166  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.805911     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.141347  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:07 old-k8s-version-135957 kubelet[655]: E1009 19:36:07.794321     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.141929  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:10 old-k8s-version-135957 kubelet[655]: E1009 19:36:10.183835     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.142254  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:14 old-k8s-version-135957 kubelet[655]: E1009 19:36:14.226874     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.142436  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:20 old-k8s-version-135957 kubelet[655]: E1009 19:36:20.794329     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.142759  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:29 old-k8s-version-135957 kubelet[655]: E1009 19:36:29.793939     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.145371  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:33 old-k8s-version-135957 kubelet[655]: E1009 19:36:33.803424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.145703  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:41 old-k8s-version-135957 kubelet[655]: E1009 19:36:41.794984     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.145888  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:44 old-k8s-version-135957 kubelet[655]: E1009 19:36:44.794752     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.146482  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.299505     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.146665  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.794264     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.146989  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:04 old-k8s-version-135957 kubelet[655]: E1009 19:37:04.225263     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.147171  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:10 old-k8s-version-135957 kubelet[655]: E1009 19:37:10.794424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.147494  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:19 old-k8s-version-135957 kubelet[655]: E1009 19:37:19.794014     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.147679  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:25 old-k8s-version-135957 kubelet[655]: E1009 19:37:25.794260     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.148013  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:33 old-k8s-version-135957 kubelet[655]: E1009 19:37:33.794463     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.148203  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:36 old-k8s-version-135957 kubelet[655]: E1009 19:37:36.794403     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.148534  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:46 old-k8s-version-135957 kubelet[655]: E1009 19:37:46.794633     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.148729  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:48 old-k8s-version-135957 kubelet[655]: E1009 19:37:48.796873     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.149055  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:59 old-k8s-version-135957 kubelet[655]: E1009 19:37:59.794479     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.151483  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:03 old-k8s-version-135957 kubelet[655]: E1009 19:38:03.802412     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:34.152171  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:12 old-k8s-version-135957 kubelet[655]: E1009 19:38:12.799928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.152410  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:15 old-k8s-version-135957 kubelet[655]: E1009 19:38:15.794598     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.153011  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:27 old-k8s-version-135957 kubelet[655]: E1009 19:38:27.530323     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.153194  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:28 old-k8s-version-135957 kubelet[655]: E1009 19:38:28.795022     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.153519  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:34 old-k8s-version-135957 kubelet[655]: E1009 19:38:34.226016     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.153707  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:39 old-k8s-version-135957 kubelet[655]: E1009 19:38:39.794229     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154038  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:49 old-k8s-version-135957 kubelet[655]: E1009 19:38:49.795059     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.154225  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:51 old-k8s-version-135957 kubelet[655]: E1009 19:38:51.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154565  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.793928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.154754  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.795042     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.154941  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:16 old-k8s-version-135957 kubelet[655]: E1009 19:39:16.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.155340  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:18 old-k8s-version-135957 kubelet[655]: E1009 19:39:18.794102     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.155552  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:27 old-k8s-version-135957 kubelet[655]: E1009 19:39:27.794317     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.155882  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: E1009 19:39:30.794130     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156066  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:42 old-k8s-version-135957 kubelet[655]: E1009 19:39:42.794279     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.156436  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: E1009 19:39:43.794072     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156775  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.806354     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.156959  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.812240     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157144  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157467  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.157651  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:34.157977  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:34.158159  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:34.158169  212114 logs.go:123] Gathering logs for kube-apiserver [4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797] ...
	I1009 19:40:34.158184  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:34.267905  212114 logs.go:123] Gathering logs for etcd [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a] ...
	I1009 19:40:34.267941  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:34.313520  212114 logs.go:123] Gathering logs for kube-proxy [e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e] ...
	I1009 19:40:34.313547  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:34.369410  212114 logs.go:123] Gathering logs for kube-controller-manager [d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4] ...
	I1009 19:40:34.369434  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:34.435867  212114 logs.go:123] Gathering logs for coredns [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33] ...
	I1009 19:40:34.435938  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:34.477407  212114 logs.go:123] Gathering logs for kube-scheduler [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd] ...
	I1009 19:40:34.477432  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:34.525043  212114 logs.go:123] Gathering logs for kube-proxy [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454] ...
	I1009 19:40:34.525071  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:34.579092  212114 logs.go:123] Gathering logs for kindnet [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9] ...
	I1009 19:40:34.579120  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:34.645088  212114 logs.go:123] Gathering logs for kindnet [c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c] ...
	I1009 19:40:34.645120  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:34.688041  212114 logs.go:123] Gathering logs for kubernetes-dashboard [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60] ...
	I1009 19:40:34.688076  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:34.750518  212114 logs.go:123] Gathering logs for storage-provisioner [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c] ...
	I1009 19:40:34.750548  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:34.792366  212114 logs.go:123] Gathering logs for storage-provisioner [932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f] ...
	I1009 19:40:34.792401  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:34.844432  212114 logs.go:123] Gathering logs for dmesg ...
	I1009 19:40:34.844506  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:40:34.862899  212114 logs.go:123] Gathering logs for kube-apiserver [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880] ...
	I1009 19:40:34.862973  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:34.966604  212114 logs.go:123] Gathering logs for kube-controller-manager [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184] ...
	I1009 19:40:34.966676  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:35.078882  212114 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:40:35.078925  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 19:40:35.317341  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:35.317406  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 19:40:35.317493  212114 out.go:270] X Problems detected in kubelet:
	W1009 19:40:35.317536  212114 out.go:270]   Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:35.317592  212114 out.go:270]   Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:35.317630  212114 out.go:270]   Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:35.317662  212114 out.go:270]   Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:35.317702  212114 out.go:270]   Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:35.317738  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:35.317758  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:40:36.538245  222294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.crt ...
	I1009 19:40:36.538275  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.crt: {Name:mk646876f00c2860d6e04f53a1e8b63771cabb25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:36.538468  222294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.key ...
	I1009 19:40:36.538482  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/client.key: {Name:mk311c4b244c4773b53d0d130c88f760abe3622a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:36.538978  222294 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key.baa294b0
	I1009 19:40:36.539003  222294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt.baa294b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1009 19:40:36.848916  222294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt.baa294b0 ...
	I1009 19:40:36.848947  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt.baa294b0: {Name:mk50ca7d566f6027841c62287fe60fe2fc1f4030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:36.849135  222294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key.baa294b0 ...
	I1009 19:40:36.849151  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key.baa294b0: {Name:mk3c466636e31a368add5e1ff9c60fd9c02cba7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:36.849695  222294 certs.go:381] copying /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt.baa294b0 -> /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt
	I1009 19:40:36.849786  222294 certs.go:385] copying /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key.baa294b0 -> /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key
	I1009 19:40:36.849849  222294 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.key
	I1009 19:40:36.849867  222294 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.crt with IP's: []
	I1009 19:40:37.298921  222294 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.crt ...
	I1009 19:40:37.298999  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.crt: {Name:mkca984c86b08b305248adb297a20aab8a1d28fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:37.299633  222294 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.key ...
	I1009 19:40:37.299656  222294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.key: {Name:mk4f0ee9c9eafa23baef4df79316a7dd1a4b9d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:40:37.300209  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596.pem (1338 bytes)
	W1009 19:40:37.300257  222294 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596_empty.pem, impossibly tiny 0 bytes
	I1009 19:40:37.300267  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:40:37.300295  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/ca.pem (1078 bytes)
	I1009 19:40:37.300341  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:40:37.300380  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/certs/key.pem (1679 bytes)
	I1009 19:40:37.300429  222294 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem (1708 bytes)
	I1009 19:40:37.301084  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:40:37.326044  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:40:37.351451  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:40:37.377308  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:40:37.402144  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 19:40:37.427319  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:40:37.451420  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:40:37.475834  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/embed-certs-269650/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:40:37.500227  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/ssl/certs/75962.pem --> /usr/share/ca-certificates/75962.pem (1708 bytes)
	I1009 19:40:37.525092  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:40:37.550333  222294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-2290/.minikube/certs/7596.pem --> /usr/share/ca-certificates/7596.pem (1338 bytes)
	I1009 19:40:37.574676  222294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:40:37.593328  222294 ssh_runner.go:195] Run: openssl version
	I1009 19:40:37.598929  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75962.pem && ln -fs /usr/share/ca-certificates/75962.pem /etc/ssl/certs/75962.pem"
	I1009 19:40:37.608742  222294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75962.pem
	I1009 19:40:37.612511  222294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:57 /usr/share/ca-certificates/75962.pem
	I1009 19:40:37.612627  222294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75962.pem
	I1009 19:40:37.620145  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75962.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:40:37.629778  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:40:37.639675  222294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:37.643286  222294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:37.643397  222294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:40:37.650534  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:40:37.660613  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7596.pem && ln -fs /usr/share/ca-certificates/7596.pem /etc/ssl/certs/7596.pem"
	I1009 19:40:37.671299  222294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7596.pem
	I1009 19:40:37.675556  222294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:57 /usr/share/ca-certificates/7596.pem
	I1009 19:40:37.675630  222294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7596.pem
	I1009 19:40:37.683708  222294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7596.pem /etc/ssl/certs/51391683.0"
	I1009 19:40:37.694186  222294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:40:37.697726  222294 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:40:37.697777  222294 kubeadm.go:392] StartCluster: {Name:embed-certs-269650 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-269650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:40:37.697852  222294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1009 19:40:37.697926  222294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:40:37.741300  222294 cri.go:89] found id: ""
	I1009 19:40:37.741372  222294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:40:37.750706  222294 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:40:37.760418  222294 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:40:37.760486  222294 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:40:37.770548  222294 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:40:37.770567  222294 kubeadm.go:157] found existing configuration files:
	
	I1009 19:40:37.770616  222294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:40:37.780213  222294 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:40:37.780332  222294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:40:37.789176  222294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:40:37.801635  222294 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:40:37.801698  222294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:40:37.811415  222294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:40:37.822687  222294 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:40:37.822849  222294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:40:37.833002  222294 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:40:37.847264  222294 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:40:37.847387  222294 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:40:37.858907  222294 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:40:37.908259  222294 kubeadm.go:310] W1009 19:40:37.907520    1045 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:40:37.909014  222294 kubeadm.go:310] W1009 19:40:37.908414    1045 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:40:37.933898  222294 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1009 19:40:38.000935  222294 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:40:45.319566  212114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:40:45.334167  212114 api_server.go:72] duration metric: took 5m58.486701724s to wait for apiserver process to appear ...
	I1009 19:40:45.334193  212114 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:40:45.334230  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:40:45.334291  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:40:45.404349  212114 cri.go:89] found id: "a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:45.404374  212114 cri.go:89] found id: "4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:45.404379  212114 cri.go:89] found id: ""
	I1009 19:40:45.404394  212114 logs.go:282] 2 containers: [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797]
	I1009 19:40:45.404454  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.408983  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.413035  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1009 19:40:45.413099  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:40:45.466616  212114 cri.go:89] found id: "5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:45.466636  212114 cri.go:89] found id: "1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:45.466641  212114 cri.go:89] found id: ""
	I1009 19:40:45.466651  212114 logs.go:282] 2 containers: [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f]
	I1009 19:40:45.466707  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.470602  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.474342  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1009 19:40:45.474415  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:40:45.536508  212114 cri.go:89] found id: "63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:45.536533  212114 cri.go:89] found id: "84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:45.536539  212114 cri.go:89] found id: ""
	I1009 19:40:45.536547  212114 logs.go:282] 2 containers: [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4]
	I1009 19:40:45.536606  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.542255  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.546144  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:40:45.546224  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:40:45.610102  212114 cri.go:89] found id: "855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:45.610126  212114 cri.go:89] found id: "187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:45.610131  212114 cri.go:89] found id: ""
	I1009 19:40:45.610138  212114 logs.go:282] 2 containers: [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e]
	I1009 19:40:45.610196  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.614464  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.618470  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:40:45.618545  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:40:45.669697  212114 cri.go:89] found id: "4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:45.669720  212114 cri.go:89] found id: "e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:45.669726  212114 cri.go:89] found id: ""
	I1009 19:40:45.669733  212114 logs.go:282] 2 containers: [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e]
	I1009 19:40:45.669794  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.676366  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.680603  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:40:45.680692  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:40:45.749148  212114 cri.go:89] found id: "a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:45.749173  212114 cri.go:89] found id: "d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:45.749178  212114 cri.go:89] found id: ""
	I1009 19:40:45.749185  212114 logs.go:282] 2 containers: [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4]
	I1009 19:40:45.749242  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.753756  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.757836  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1009 19:40:45.757914  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:40:45.935969  212114 cri.go:89] found id: "c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:45.935996  212114 cri.go:89] found id: "c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:45.936001  212114 cri.go:89] found id: ""
	I1009 19:40:45.936008  212114 logs.go:282] 2 containers: [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c]
	I1009 19:40:45.936065  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.940437  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:45.953098  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 19:40:45.953178  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 19:40:46.012160  212114 cri.go:89] found id: "ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:46.012185  212114 cri.go:89] found id: ""
	I1009 19:40:46.012193  212114 logs.go:282] 1 containers: [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60]
	I1009 19:40:46.012253  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.016624  212114 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1009 19:40:46.016713  212114 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 19:40:46.083357  212114 cri.go:89] found id: "70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:46.083383  212114 cri.go:89] found id: "932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:46.083388  212114 cri.go:89] found id: ""
	I1009 19:40:46.083396  212114 logs.go:282] 2 containers: [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f]
	I1009 19:40:46.083456  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.087674  212114 ssh_runner.go:195] Run: which crictl
	I1009 19:40:46.091613  212114 logs.go:123] Gathering logs for storage-provisioner [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c] ...
	I1009 19:40:46.091642  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c"
	I1009 19:40:46.144851  212114 logs.go:123] Gathering logs for etcd [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a] ...
	I1009 19:40:46.144890  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a"
	I1009 19:40:46.214756  212114 logs.go:123] Gathering logs for coredns [84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4] ...
	I1009 19:40:46.214788  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4"
	I1009 19:40:46.268020  212114 logs.go:123] Gathering logs for kube-scheduler [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd] ...
	I1009 19:40:46.268058  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd"
	I1009 19:40:46.318517  212114 logs.go:123] Gathering logs for kube-scheduler [187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e] ...
	I1009 19:40:46.318547  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e"
	I1009 19:40:46.374095  212114 logs.go:123] Gathering logs for kube-controller-manager [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184] ...
	I1009 19:40:46.374136  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184"
	I1009 19:40:46.460506  212114 logs.go:123] Gathering logs for kindnet [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9] ...
	I1009 19:40:46.460541  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9"
	I1009 19:40:46.524819  212114 logs.go:123] Gathering logs for kindnet [c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c] ...
	I1009 19:40:46.524849  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c"
	I1009 19:40:46.583620  212114 logs.go:123] Gathering logs for kubelet ...
	I1009 19:40:46.583702  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 19:40:46.644048  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141088     655 reflector.go:138] object-"kube-system"/"kube-proxy-token-l28zw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-l28zw" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644351  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141226     655 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644618  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141301     655 reflector.go:138] object-"kube-system"/"kindnet-token-ch425": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ch425" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.644879  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141420     655 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645133  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.141526     655 reflector.go:138] object-"kube-system"/"coredns-token-x8mx9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-x8mx9" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645400  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.145947     655 reflector.go:138] object-"kube-system"/"metrics-server-token-n5d8s": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-n5d8s" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645645  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146058     655 reflector.go:138] object-"default"/"default-token-dtkng": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-dtkng" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.645924  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:03 old-k8s-version-135957 kubelet[655]: E1009 19:35:03.146124     655 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8fzd6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8fzd6" is forbidden: User "system:node:old-k8s-version-135957" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-135957' and this object
	W1009 19:40:46.654238  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.568696     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.655773  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:05 old-k8s-version-135957 kubelet[655]: E1009 19:35:05.908103     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.658740  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:18 old-k8s-version-135957 kubelet[655]: E1009 19:35:18.807162     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.660983  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:27 old-k8s-version-135957 kubelet[655]: E1009 19:35:27.017732     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.661371  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:28 old-k8s-version-135957 kubelet[655]: E1009 19:35:28.030088     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.661645  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:29 old-k8s-version-135957 kubelet[655]: E1009 19:35:29.798736     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.662027  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:34 old-k8s-version-135957 kubelet[655]: E1009 19:35:34.224840     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.662893  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:36 old-k8s-version-135957 kubelet[655]: E1009 19:35:36.070564     655 pod_workers.go:191] Error syncing pod dbfd3538-0cb4-4cf0-b208-e18c725f6d5d ("storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(dbfd3538-0cb4-4cf0-b208-e18c725f6d5d)"
	W1009 19:40:46.665881  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:40 old-k8s-version-135957 kubelet[655]: E1009 19:35:40.810507     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.666940  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:48 old-k8s-version-135957 kubelet[655]: E1009 19:35:48.113466     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.667468  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.225991     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.667683  212114 logs.go:138] Found kubelet problem: Oct 09 19:35:54 old-k8s-version-135957 kubelet[655]: E1009 19:35:54.805911     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.667944  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:07 old-k8s-version-135957 kubelet[655]: E1009 19:36:07.794321     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.668632  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:10 old-k8s-version-135957 kubelet[655]: E1009 19:36:10.183835     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.669178  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:14 old-k8s-version-135957 kubelet[655]: E1009 19:36:14.226874     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.669376  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:20 old-k8s-version-135957 kubelet[655]: E1009 19:36:20.794329     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.669743  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:29 old-k8s-version-135957 kubelet[655]: E1009 19:36:29.793939     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.672283  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:33 old-k8s-version-135957 kubelet[655]: E1009 19:36:33.803424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.672629  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:41 old-k8s-version-135957 kubelet[655]: E1009 19:36:41.794984     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.672973  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:44 old-k8s-version-135957 kubelet[655]: E1009 19:36:44.794752     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.673626  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.299505     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.673854  212114 logs.go:138] Found kubelet problem: Oct 09 19:36:57 old-k8s-version-135957 kubelet[655]: E1009 19:36:57.794264     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.674216  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:04 old-k8s-version-135957 kubelet[655]: E1009 19:37:04.225263     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.674431  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:10 old-k8s-version-135957 kubelet[655]: E1009 19:37:10.794424     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.674804  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:19 old-k8s-version-135957 kubelet[655]: E1009 19:37:19.794014     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.675040  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:25 old-k8s-version-135957 kubelet[655]: E1009 19:37:25.794260     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.675400  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:33 old-k8s-version-135957 kubelet[655]: E1009 19:37:33.794463     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.675661  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:36 old-k8s-version-135957 kubelet[655]: E1009 19:37:36.794403     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.676031  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:46 old-k8s-version-135957 kubelet[655]: E1009 19:37:46.794633     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.676261  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:48 old-k8s-version-135957 kubelet[655]: E1009 19:37:48.796873     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.676632  212114 logs.go:138] Found kubelet problem: Oct 09 19:37:59 old-k8s-version-135957 kubelet[655]: E1009 19:37:59.794479     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.679482  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:03 old-k8s-version-135957 kubelet[655]: E1009 19:38:03.802412     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1009 19:40:46.679873  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:12 old-k8s-version-135957 kubelet[655]: E1009 19:38:12.799928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.680105  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:15 old-k8s-version-135957 kubelet[655]: E1009 19:38:15.794598     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.680775  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:27 old-k8s-version-135957 kubelet[655]: E1009 19:38:27.530323     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.680993  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:28 old-k8s-version-135957 kubelet[655]: E1009 19:38:28.795022     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.681361  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:34 old-k8s-version-135957 kubelet[655]: E1009 19:38:34.226016     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.681590  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:39 old-k8s-version-135957 kubelet[655]: E1009 19:38:39.794229     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.681999  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:49 old-k8s-version-135957 kubelet[655]: E1009 19:38:49.795059     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.682230  212114 logs.go:138] Found kubelet problem: Oct 09 19:38:51 old-k8s-version-135957 kubelet[655]: E1009 19:38:51.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.682621  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.793928     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.682835  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:03 old-k8s-version-135957 kubelet[655]: E1009 19:39:03.795042     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.683069  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:16 old-k8s-version-135957 kubelet[655]: E1009 19:39:16.794422     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.683468  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:18 old-k8s-version-135957 kubelet[655]: E1009 19:39:18.794102     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.683703  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:27 old-k8s-version-135957 kubelet[655]: E1009 19:39:27.794317     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.684106  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: E1009 19:39:30.794130     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.684324  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:42 old-k8s-version-135957 kubelet[655]: E1009 19:39:42.794279     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.684724  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: E1009 19:39:43.794072     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.685423  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.806354     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.686352  212114 logs.go:138] Found kubelet problem: Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.812240     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.687361  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.687730  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.687961  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.688346  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.688589  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:46.688999  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:46.692433  212114 logs.go:138] Found kubelet problem: Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:46.692450  212114 logs.go:123] Gathering logs for kube-apiserver [4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797] ...
	I1009 19:40:46.692464  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797"
	I1009 19:40:46.775228  212114 logs.go:123] Gathering logs for etcd [1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f] ...
	I1009 19:40:46.775261  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f"
	I1009 19:40:46.843804  212114 logs.go:123] Gathering logs for kube-controller-manager [d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4] ...
	I1009 19:40:46.843886  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4"
	I1009 19:40:46.910698  212114 logs.go:123] Gathering logs for containerd ...
	I1009 19:40:46.910734  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1009 19:40:46.996012  212114 logs.go:123] Gathering logs for dmesg ...
	I1009 19:40:46.996052  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:40:47.015084  212114 logs.go:123] Gathering logs for kube-apiserver [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880] ...
	I1009 19:40:47.015112  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880"
	I1009 19:40:47.107848  212114 logs.go:123] Gathering logs for kubernetes-dashboard [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60] ...
	I1009 19:40:47.107928  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60"
	I1009 19:40:47.154334  212114 logs.go:123] Gathering logs for storage-provisioner [932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f] ...
	I1009 19:40:47.154412  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f"
	I1009 19:40:47.203396  212114 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:40:47.203474  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 19:40:47.462255  212114 logs.go:123] Gathering logs for coredns [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33] ...
	I1009 19:40:47.462327  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33"
	I1009 19:40:47.515099  212114 logs.go:123] Gathering logs for kube-proxy [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454] ...
	I1009 19:40:47.515182  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454"
	I1009 19:40:47.569213  212114 logs.go:123] Gathering logs for kube-proxy [e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e] ...
	I1009 19:40:47.569289  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e"
	I1009 19:40:47.620464  212114 logs.go:123] Gathering logs for container status ...
	I1009 19:40:47.620545  212114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:40:47.675639  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:47.675797  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 19:40:47.675885  212114 out.go:270] X Problems detected in kubelet:
	W1009 19:40:47.675951  212114 out.go:270]   Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:47.675995  212114 out.go:270]   Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:47.676047  212114 out.go:270]   Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1009 19:40:47.676086  212114 out.go:270]   Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	W1009 19:40:47.676137  212114 out.go:270]   Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1009 19:40:47.676196  212114 out.go:358] Setting ErrFile to fd 2...
	I1009 19:40:47.676228  212114 out.go:392] TERM=,COLORTERM=, which probably does not support color
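The 212114 block above closes with minikube's own summary of the two recurring kubelet problems on old-k8s-version-135957: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because the registry host never resolves, and dashboard-metrics-scraper keeps restarting into CrashLoopBackOff. As an illustrative follow-up only (not part of the captured run), the same two pods could be inspected by hand with stock kubectl, using the pod names, namespaces, and context name that appear in the log:

	# Events for the two failing pods; expect ErrImagePull/ImagePullBackOff and CrashLoopBackOff respectively
	kubectl --context old-k8s-version-135957 -n kube-system describe pod metrics-server-9975d5f86-jcsl5
	kubectl --context old-k8s-version-135957 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-hjq4h
	# Warning events across the cluster, newest last
	kubectl --context old-k8s-version-135957 get events -A --field-selector type=Warning --sort-by=.lastTimestamp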
	I1009 19:40:56.579286  222294 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 19:40:56.579351  222294 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 19:40:56.579441  222294 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:40:56.579500  222294 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1009 19:40:56.579538  222294 kubeadm.go:310] OS: Linux
	I1009 19:40:56.579587  222294 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 19:40:56.579639  222294 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 19:40:56.579693  222294 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 19:40:56.579743  222294 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 19:40:56.579794  222294 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 19:40:56.579845  222294 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 19:40:56.579893  222294 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 19:40:56.579943  222294 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 19:40:56.579993  222294 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 19:40:56.580069  222294 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:40:56.580166  222294 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:40:56.580258  222294 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:40:56.580322  222294 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:40:56.581979  222294 out.go:235]   - Generating certificates and keys ...
	I1009 19:40:56.582063  222294 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 19:40:56.582133  222294 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 19:40:56.582205  222294 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:40:56.582279  222294 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:40:56.582342  222294 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:40:56.582396  222294 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 19:40:56.582452  222294 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 19:40:56.582575  222294 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-269650 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:40:56.582631  222294 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 19:40:56.582752  222294 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-269650 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1009 19:40:56.582819  222294 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:40:56.582883  222294 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:40:56.582930  222294 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 19:40:56.582988  222294 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:40:56.583042  222294 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:40:56.583104  222294 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:40:56.583162  222294 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:40:56.583231  222294 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:40:56.583288  222294 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:40:56.583375  222294 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:40:56.583444  222294 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:40:56.584906  222294 out.go:235]   - Booting up control plane ...
	I1009 19:40:56.585049  222294 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:40:56.585141  222294 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:40:56.585236  222294 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:40:56.585379  222294 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:40:56.585479  222294 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:40:56.585523  222294 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 19:40:56.585671  222294 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:40:56.585784  222294 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:40:56.585849  222294 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501952706s
	I1009 19:40:56.585926  222294 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 19:40:56.585992  222294 kubeadm.go:310] [api-check] The API server is healthy after 7.001446567s
	I1009 19:40:56.586100  222294 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:40:56.586226  222294 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:40:56.586287  222294 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:40:56.586473  222294 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-269650 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:40:56.586532  222294 kubeadm.go:310] [bootstrap-token] Using token: 8l7pj7.fik8jhdqoqzcp1rj
	I1009 19:40:56.588083  222294 out.go:235]   - Configuring RBAC rules ...
	I1009 19:40:56.588196  222294 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:40:56.588292  222294 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:40:56.588434  222294 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:40:56.588582  222294 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:40:56.588734  222294 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:40:56.588825  222294 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:40:56.588947  222294 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:40:56.588993  222294 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 19:40:56.589043  222294 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 19:40:56.589053  222294 kubeadm.go:310] 
	I1009 19:40:56.589119  222294 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 19:40:56.589128  222294 kubeadm.go:310] 
	I1009 19:40:56.589204  222294 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 19:40:56.589213  222294 kubeadm.go:310] 
	I1009 19:40:56.589238  222294 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 19:40:56.589299  222294 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:40:56.589351  222294 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:40:56.589358  222294 kubeadm.go:310] 
	I1009 19:40:56.589412  222294 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 19:40:56.589420  222294 kubeadm.go:310] 
	I1009 19:40:56.589467  222294 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:40:56.589474  222294 kubeadm.go:310] 
	I1009 19:40:56.589526  222294 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 19:40:56.589603  222294 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:40:56.589674  222294 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:40:56.589682  222294 kubeadm.go:310] 
	I1009 19:40:56.589765  222294 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:40:56.589844  222294 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 19:40:56.589851  222294 kubeadm.go:310] 
	I1009 19:40:56.589933  222294 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8l7pj7.fik8jhdqoqzcp1rj \
	I1009 19:40:56.590038  222294 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:46ecff2404792e73c0fde7b74431755068cf24bba8856a1cc3cbe480cfe7ea71 \
	I1009 19:40:56.590062  222294 kubeadm.go:310] 	--control-plane 
	I1009 19:40:56.590071  222294 kubeadm.go:310] 
	I1009 19:40:56.590155  222294 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:40:56.590163  222294 kubeadm.go:310] 
	I1009 19:40:56.590246  222294 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8l7pj7.fik8jhdqoqzcp1rj \
	I1009 19:40:56.590363  222294 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:46ecff2404792e73c0fde7b74431755068cf24bba8856a1cc3cbe480cfe7ea71 
	I1009 19:40:56.590375  222294 cni.go:84] Creating CNI manager for ""
	I1009 19:40:56.590382  222294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 19:40:56.592324  222294 out.go:177] * Configuring CNI (Container Networking Interface) ...
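The interleaved 222294 lines belong to a different profile (embed-certs-269650) that is partway through a fresh kubeadm init; with the docker driver and containerd runtime, minikube recommends kindnet, which is what the CNI step above configures. A minimal sketch, assuming only the kubeconfig context minikube creates for that profile, of how one could confirm the new control plane is usable once CNI configuration completes:

	# The single node should reach Ready once kindnet has installed the CNI config
	kubectl --context embed-certs-269650 get nodes -o wide
	# Control-plane pods and the kindnet daemonset all run in kube-system
	kubectl --context embed-certs-269650 -n kube-system get pods -o wide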
	I1009 19:40:57.677635  212114 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1009 19:40:57.689852  212114 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1009 19:40:57.692719  212114 out.go:201] 
	W1009 19:40:57.694057  212114 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1009 19:40:57.694098  212114 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1009 19:40:57.694120  212114 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1009 19:40:57.694126  212114 out.go:270] * 
	W1009 19:40:57.695133  212114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:40:57.697281  212114 out.go:201] 
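This is where the SecondStart check actually fails: the apiserver at 192.168.76.2:8443 answers /healthz with 200, yet minikube gives up waiting for the control plane to report the expected v1.20.0 and exits with K8S_UNHEALTHY_CONTROL_PLANE. A quick, hedged way to see what version the control plane is really serving (an illustrative check, not something the harness runs):

	# Client and server versions side by side
	kubectl --context old-k8s-version-135957 version -o json
	# Or query the version endpoint directly, mirroring the healthz probe above
	curl -sk https://192.168.76.2:8443/version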
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	2635990d5e8d9       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   6be23148e49af       dashboard-metrics-scraper-8d5bb5db8-hjq4h
	70879940cc769       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   1de6e43eebf8b       storage-provisioner
	ea122f5ea8fc6       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   25f7c96003e73       kubernetes-dashboard-cd95d586-5fpql
	63cf0a429f88f       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   15283005bc80d       coredns-74ff55c5b-txr44
	c26ad772ee958       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                 1                   7e7bef3881ee5       kindnet-5rbc6
	4d322977adb34       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   554d6af6d02c2       kube-proxy-whqjp
	33227ebf9aad7       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   903879911acc8       busybox
	932274f4fc040       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   1de6e43eebf8b       storage-provisioner
	a72ce95e2b81f       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   d80a8b050e9b7       kube-controller-manager-old-k8s-version-135957
	5b0aed246c27a       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   f93781e502a30       etcd-old-k8s-version-135957
	a86cc728bc887       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   8d6ebdc951d62       kube-apiserver-old-k8s-version-135957
	855986d2034d9       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   5c03bf2072449       kube-scheduler-old-k8s-version-135957
	2e87a804ea0c4       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   c4b496d9d9b18       busybox
	84036bbee69a2       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   17c4d6a21c1f4       coredns-74ff55c5b-txr44
	c3976d1ea15f2       0bcd66b03df5f       8 minutes ago       Exited              kindnet-cni                 0                   dec6672f2aa64       kindnet-5rbc6
	e8859b886635d       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   54d3ec4654041       kube-proxy-whqjp
	d992b690b4b29       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   c708a812c31ac       kube-controller-manager-old-k8s-version-135957
	187e30802bad2       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   b26fe1a489965       kube-scheduler-old-k8s-version-135957
	1c8e2d58066cf       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   babf9c5bfb80f       etcd-old-k8s-version-135957
	4ad7bf6ad0e52       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   91b8411ee1710       kube-apiserver-old-k8s-version-135957
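This table is the output of the "sudo crictl ps -a || sudo docker ps -a" fallback shown earlier in the log. To poke at any of these containers by hand, one could SSH into the node and reuse crictl directly; the profile name is taken from the log, the rest is ordinary crictl usage:

	# Shell into the old-k8s-version node (docker driver)
	minikube -p old-k8s-version-135957 ssh
	# Inside the node: all containers, including exited ones
	sudo crictl ps -a
	# Logs for a specific container by ID prefix, e.g. the exited dashboard-metrics-scraper
	sudo crictl logs 2635990d5e8d9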
	
	
	==> containerd <==
	Oct 09 19:36:56 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:56.824928819Z" level=info msg="StartContainer for \"f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4\""
	Oct 09 19:36:56 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:56.921023735Z" level=info msg="StartContainer for \"f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4\" returns successfully"
	Oct 09 19:36:56 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:56.955643088Z" level=info msg="shim disconnected" id=f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4 namespace=k8s.io
	Oct 09 19:36:56 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:56.955708457Z" level=warning msg="cleaning up after shim disconnected" id=f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4 namespace=k8s.io
	Oct 09 19:36:56 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:56.955719164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 09 19:36:57 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:57.298642213Z" level=info msg="RemoveContainer for \"6db0c2b67644e8f4508105525c5edf8a7ad5b746e42940cc51eade34a7afa6de\""
	Oct 09 19:36:57 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:36:57.304346501Z" level=info msg="RemoveContainer for \"6db0c2b67644e8f4508105525c5edf8a7ad5b746e42940cc51eade34a7afa6de\" returns successfully"
	Oct 09 19:38:03 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:03.794489427Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:38:03 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:03.800164186Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 09 19:38:03 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:03.801889665Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 09 19:38:03 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:03.801980117Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.797241040Z" level=info msg="CreateContainer within sandbox \"6be23148e49af1776409ef15eb37d2244e83f4904f75761ade1e8031f85793a5\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.817117869Z" level=info msg="CreateContainer within sandbox \"6be23148e49af1776409ef15eb37d2244e83f4904f75761ade1e8031f85793a5\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e\""
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.817787009Z" level=info msg="StartContainer for \"2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e\""
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.894629772Z" level=info msg="StartContainer for \"2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e\" returns successfully"
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.921083479Z" level=info msg="shim disconnected" id=2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e namespace=k8s.io
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.921143704Z" level=warning msg="cleaning up after shim disconnected" id=2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e namespace=k8s.io
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.921154633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 09 19:38:26 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:26.938721207Z" level=warning msg="cleanup warnings time=\"2024-10-09T19:38:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Oct 09 19:38:27 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:27.531417930Z" level=info msg="RemoveContainer for \"f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4\""
	Oct 09 19:38:27 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:38:27.535776571Z" level=info msg="RemoveContainer for \"f40d86867942c0c44ac19f02d48e43abc219e3e13771f3c8586222ed29129ad4\" returns successfully"
	Oct 09 19:40:54 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:40:54.795185283Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:54 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:40:54.802911037Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 09 19:40:54 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:40:54.804352764Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 09 19:40:54 old-k8s-version-135957 containerd[566]: time="2024-10-09T19:40:54.804469791Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
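The containerd journal pins down the metrics-server root cause: fake.domain never resolves against the node's DNS server at 192.168.76.1, so every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails before any registry traffic is attempted. A quick way to confirm that from inside the node, assuming the usual resolver tools are present there:

	# The lookup containerd attempts; expect NXDOMAIN / no such host
	nslookup fake.domain 192.168.76.1
	# Same question through the libc resolver path
	getent hosts fake.domain || echo 'fake.domain does not resolve'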
	
	
	==> coredns [63cf0a429f88fba3145d366c1fcc23f5cfdebced0ea1be2236fca2f81829cb33] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57611 - 7565 "HINFO IN 9088653211537284142.8595202348181463480. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014879514s
	
	
	==> coredns [84036bbee69a29ef082da3b3087d897adbb11db9bdadc90f36135bd91a8b88e4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:36419 - 7371 "HINFO IN 3087926151228416142.221234552813058234. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.043555675s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-135957
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-135957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=old-k8s-version-135957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T19_32_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:32:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-135957
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:40:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:36:03 +0000   Wed, 09 Oct 2024 19:32:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:36:03 +0000   Wed, 09 Oct 2024 19:32:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:36:03 +0000   Wed, 09 Oct 2024 19:32:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:36:03 +0000   Wed, 09 Oct 2024 19:32:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-135957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 bac715afa91442c9a804b44a38c50d28
	  System UUID:                ca6776eb-7805-4d48-8777-68103ae2f9fd
	  Boot ID:                    82386538-14d4-4a77-b4cb-0988d545cff7
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 coredns-74ff55c5b-txr44                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m22s
	  kube-system                 etcd-old-k8s-version-135957                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m29s
	  kube-system                 kindnet-5rbc6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m22s
	  kube-system                 kube-apiserver-old-k8s-version-135957             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-old-k8s-version-135957    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-whqjp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-old-k8s-version-135957             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 metrics-server-9975d5f86-jcsl5                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-hjq4h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-5fpql               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m49s (x5 over 8m49s)  kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s (x4 over 8m49s)  kubelet     Node old-k8s-version-135957 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s (x4 over 8m49s)  kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet     Node old-k8s-version-135957 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s                  kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m22s                  kubelet     Node old-k8s-version-135957 status is now: NodeReady
	  Normal  Starting                 8m21s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m5s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)    kubelet     Node old-k8s-version-135957 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x7 over 6m5s)    kubelet     Node old-k8s-version-135957 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m54s                  kube-proxy  Starting kube-proxy.
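The node itself looks healthy throughout: Ready, no taints, only 950m of the 2 available CPUs requested, and the event history simply records the kubelet and kube-proxy restarting for the second start. If only the resource summary is of interest, the same describe call the log runs can be trimmed down; a small convenience sketch, not part of the harness:

	# Just the Allocated resources block from the node description
	kubectl --context old-k8s-version-135957 describe node old-k8s-version-135957 \
	  | sed -n '/Allocated resources:/,/Events:/p'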
	
	
	==> dmesg <==
	[Oct 9 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015212] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.462139] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.053294] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014996] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.652682] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.112018] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 9 19:26] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1c8e2d58066cf86a2a21f63512a26645c4aca5e9c0e35ab5524cf050d204d08f] <==
	raft2024/10/09 19:32:11 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/10/09 19:32:11 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/09 19:32:11 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/09 19:32:11 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/09 19:32:11 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-09 19:32:11.423227 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-09 19:32:11.423494 I | etcdserver: published {Name:old-k8s-version-135957 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-09 19:32:11.423692 I | embed: ready to serve client requests
	2024-10-09 19:32:11.433298 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-09 19:32:11.433574 I | embed: ready to serve client requests
	2024-10-09 19:32:11.434880 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-09 19:32:11.455668 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-09 19:32:11.455933 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-09 19:32:36.623352 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:32:38.133096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:32:48.132755 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:32:58.132872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:08.132958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:18.132782 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:28.132940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:38.132731 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:48.132906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:33:58.132735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:34:08.132911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:34:18.133075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5b0aed246c27ad7c1ddcacdb468f3f044ce18481900112163a8cf9ed5eea809a] <==
	2024-10-09 19:36:58.951121 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:08.950366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:18.950189 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:28.950193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:38.950333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:48.950333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:37:58.950396 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:08.950202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:18.950273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:28.950168 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:38.950169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:48.950075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:38:58.950107 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:08.950141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:18.950186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:28.950151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:38.950097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:48.950271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:39:58.950101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:08.950546 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:18.950181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:28.950077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:38.950294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:48.950172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-09 19:40:58.950787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
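Both etcd generations report nothing but successful /health probes every ten seconds, so etcd is not what kept the control plane from being declared healthy. The same endpoint can be queried by hand from inside the node; the certificate paths below follow the kubeadm certificateDir (/var/lib/minikube/certs) and the etcd/healthcheck-client certificate generated earlier in this log, but treat them as an assumption rather than something this run verified:

	# Ask etcd for its health the same way the /health lines above were produced
	sudo curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
	     --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	     --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
	     https://127.0.0.1:2379/health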
	
	
	==> kernel <==
	 19:40:59 up  1:23,  0 users,  load average: 2.82, 2.12, 2.42
	Linux old-k8s-version-135957 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c26ad772ee958513c626214202493556f6ebc49584722d4a63b9e146a275c4d9] <==
	I1009 19:38:57.710276       1 main.go:300] handling current node
	I1009 19:39:07.701781       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:07.701814       1 main.go:300] handling current node
	I1009 19:39:17.708738       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:17.708774       1 main.go:300] handling current node
	I1009 19:39:27.701688       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:27.701729       1 main.go:300] handling current node
	I1009 19:39:37.702012       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:37.702044       1 main.go:300] handling current node
	I1009 19:39:47.709788       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:47.709824       1 main.go:300] handling current node
	I1009 19:39:57.709633       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:39:57.709668       1 main.go:300] handling current node
	I1009 19:40:07.702069       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:07.702106       1 main.go:300] handling current node
	I1009 19:40:17.710294       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:17.710331       1 main.go:300] handling current node
	I1009 19:40:27.708796       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:27.709069       1 main.go:300] handling current node
	I1009 19:40:37.702188       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:37.702222       1 main.go:300] handling current node
	I1009 19:40:47.708720       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:47.708756       1 main.go:300] handling current node
	I1009 19:40:57.710548       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:40:57.710621       1 main.go:300] handling current node
	
	
	==> kindnet [c3976d1ea15f2ff131facc548ade3a7c232dbda5819961fc0b4a1d787965c66c] <==
	I1009 19:32:40.901779       1 controller.go:342] Waiting for informer caches to sync
	I1009 19:32:40.901786       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1009 19:32:41.202338       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1009 19:32:41.202377       1 metrics.go:61] Registering metrics
	I1009 19:32:41.202616       1 controller.go:378] Syncing nftables rules
	I1009 19:32:50.909243       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:32:50.909312       1 main.go:300] handling current node
	I1009 19:33:00.901959       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:00.901995       1 main.go:300] handling current node
	I1009 19:33:10.907781       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:10.907815       1 main.go:300] handling current node
	I1009 19:33:20.907301       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:20.907336       1 main.go:300] handling current node
	I1009 19:33:30.902215       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:30.902248       1 main.go:300] handling current node
	I1009 19:33:40.902156       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:40.902218       1 main.go:300] handling current node
	I1009 19:33:50.907771       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:33:50.907804       1 main.go:300] handling current node
	I1009 19:34:00.909325       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:34:00.909358       1 main.go:300] handling current node
	I1009 19:34:10.901683       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:34:10.901716       1 main.go:300] handling current node
	I1009 19:34:20.905217       1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
	I1009 19:34:20.905312       1 main.go:300] handling current node
	
	
	==> kube-apiserver [4ad7bf6ad0e5265c4a9fd21e84e1e1af06eeff17564ed8007acfde961022a797] <==
	I1009 19:32:19.524589       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1009 19:32:19.540607       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1009 19:32:19.548536       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1009 19:32:19.548566       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1009 19:32:20.040936       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 19:32:20.095222       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1009 19:32:20.219864       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1009 19:32:20.221073       1 controller.go:606] quota admission added evaluator for: endpoints
	I1009 19:32:20.226209       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:32:20.563538       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 19:32:21.289998       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1009 19:32:21.757328       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1009 19:32:21.813892       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1009 19:32:37.220251       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1009 19:32:37.385010       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1009 19:32:51.024175       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:32:51.024215       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:32:51.024224       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1009 19:33:23.317564       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:33:23.317630       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:33:23.317639       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1009 19:34:00.577725       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:34:00.577803       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:34:00.577814       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E1009 19:34:25.966579       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-apiserver [a86cc728bc8874c68b3d2f49c664827c126a21d2a5bdecc230cc8b936b97b880] <==
	I1009 19:37:33.902863       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:37:33.902873       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1009 19:38:05.652284       1 handler_proxy.go:102] no RequestInfo found in the context
	E1009 19:38:05.652362       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1009 19:38:05.652373       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 19:38:09.226371       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:38:09.226412       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:38:09.226421       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1009 19:38:41.230084       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:38:41.230126       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:38:41.230135       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1009 19:39:16.475051       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:39:16.475093       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:39:16.475102       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1009 19:39:53.152599       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:39:53.152678       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:39:53.152712       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1009 19:40:04.234979       1 handler_proxy.go:102] no RequestInfo found in the context
	E1009 19:40:04.235056       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1009 19:40:04.235071       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 19:40:35.028430       1 client.go:360] parsed scheme: "passthrough"
	I1009 19:40:35.028483       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1009 19:40:35.028492       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [a72ce95e2b81fd5c264bda698f6f4d1b0c484887be8796402a59c8bd18d9e184] <==
	E1009 19:36:52.314054       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:36:57.907397       1 request.go:655] Throttling request took 1.048392656s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:36:58.758796       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:37:22.815852       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:37:30.409320       1 request.go:655] Throttling request took 1.04836041s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:37:31.260810       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:37:53.317732       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:38:02.911389       1 request.go:655] Throttling request took 1.048530451s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:38:03.762803       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:38:23.819833       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:38:35.413380       1 request.go:655] Throttling request took 1.048261533s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:38:36.266504       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:38:54.321694       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:39:07.916953       1 request.go:655] Throttling request took 1.048490434s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1009 19:39:08.768401       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:39:24.823508       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:39:40.418854       1 request.go:655] Throttling request took 1.048253073s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:39:41.270205       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:39:55.402783       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:40:12.920597       1 request.go:655] Throttling request took 1.048307103s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:40:13.771971       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:40:25.904789       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1009 19:40:45.422316       1 request.go:655] Throttling request took 1.04763807s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1009 19:40:46.274236       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 19:40:56.480518       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [d992b690b4b295494e1d4053488cd50d6b5bb5c448447642a979dacd9803d0b4] <==
	I1009 19:32:37.466449       1 shared_informer.go:247] Caches are synced for resource quota 
	I1009 19:32:37.466689       1 shared_informer.go:247] Caches are synced for taint 
	I1009 19:32:37.467445       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W1009 19:32:37.467694       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-135957. Assuming now as a timestamp.
	I1009 19:32:37.467963       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1009 19:32:37.468434       1 event.go:291] "Event occurred" object="old-k8s-version-135957" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-135957 event: Registered Node old-k8s-version-135957 in Controller"
	I1009 19:32:37.466702       1 shared_informer.go:247] Caches are synced for PV protection 
	I1009 19:32:37.468700       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1009 19:32:37.467202       1 shared_informer.go:247] Caches are synced for expand 
	I1009 19:32:37.499885       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1009 19:32:37.521259       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-whqjp"
	I1009 19:32:37.522484       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-txr44"
	I1009 19:32:37.526853       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-135957" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1009 19:32:37.537332       1 shared_informer.go:247] Caches are synced for attach detach 
	I1009 19:32:37.624979       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1009 19:32:37.912168       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1009 19:32:37.912188       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1009 19:32:37.925245       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1009 19:32:38.112056       1 request.go:655] Throttling request took 1.047256334s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	I1009 19:32:38.931858       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I1009 19:32:38.932034       1 shared_informer.go:247] Caches are synced for resource quota 
	I1009 19:32:38.983823       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1009 19:32:39.018562       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-mhkn2"
	I1009 19:32:42.468351       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1009 19:34:25.495175       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-proxy [4d322977adb34505800303fa80faeb24ab7f9ebaab257bc66df4746a1d721454] <==
	I1009 19:35:05.725219       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1009 19:35:05.725361       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1009 19:35:05.761099       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1009 19:35:05.761287       1 server_others.go:185] Using iptables Proxier.
	I1009 19:35:05.761641       1 server.go:650] Version: v1.20.0
	I1009 19:35:05.762315       1 config.go:315] Starting service config controller
	I1009 19:35:05.762420       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1009 19:35:05.762505       1 config.go:224] Starting endpoint slice config controller
	I1009 19:35:05.762577       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1009 19:35:05.862626       1 shared_informer.go:247] Caches are synced for service config 
	I1009 19:35:05.862681       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [e8859b886635d3afbca91152808b9f332358001778bf0d7aa5f21759a09bb89e] <==
	I1009 19:32:38.431017       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1009 19:32:38.431104       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1009 19:32:38.459385       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1009 19:32:38.459471       1 server_others.go:185] Using iptables Proxier.
	I1009 19:32:38.459752       1 server.go:650] Version: v1.20.0
	I1009 19:32:38.460244       1 config.go:315] Starting service config controller
	I1009 19:32:38.460252       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1009 19:32:38.460959       1 config.go:224] Starting endpoint slice config controller
	I1009 19:32:38.460966       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1009 19:32:38.560381       1 shared_informer.go:247] Caches are synced for service config 
	I1009 19:32:38.564839       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [187e30802bad2aaeab65192a456cb8381d5f88344e4afb2f53890a5b849ebf1e] <==
	I1009 19:32:16.023033       1 serving.go:331] Generated self-signed cert in-memory
	W1009 19:32:18.813947       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:32:18.814092       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:32:18.814156       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:32:18.814197       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:32:18.874169       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1009 19:32:18.879708       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1009 19:32:18.879846       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:32:18.883346       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 19:32:18.887212       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:32:18.887726       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:32:18.887926       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 19:32:18.888174       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 19:32:18.888836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:32:18.895416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 19:32:18.895526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:32:18.895582       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 19:32:18.895714       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:32:18.895760       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 19:32:18.895823       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 19:32:18.897413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:32:19.720273       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 19:32:19.807119       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:32:19.901047       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1009 19:32:21.683686       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [855986d2034d90a9fde6bf1a7e0380afe758f230033029f7ba587777307ccadd] <==
	I1009 19:34:57.568329       1 serving.go:331] Generated self-signed cert in-memory
	W1009 19:35:02.842378       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 19:35:02.847200       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 19:35:02.847956       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 19:35:02.848201       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 19:35:03.280703       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:35:03.280744       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 19:35:03.281531       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1009 19:35:03.281571       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1009 19:35:03.381173       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 09 19:39:27 old-k8s-version-135957 kubelet[655]: E1009 19:39:27.794317     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: I1009 19:39:30.793735     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:39:30 old-k8s-version-135957 kubelet[655]: E1009 19:39:30.794130     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:39:42 old-k8s-version-135957 kubelet[655]: E1009 19:39:42.794279     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: I1009 19:39:43.793569     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:39:43 old-k8s-version-135957 kubelet[655]: E1009 19:39:43.794072     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: I1009 19:39:56.805620     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.806354     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:39:56 old-k8s-version-135957 kubelet[655]: E1009 19:39:56.812240     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:07 old-k8s-version-135957 kubelet[655]: E1009 19:40:07.793973     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: I1009 19:40:11.793594     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:40:11 old-k8s-version-135957 kubelet[655]: E1009 19:40:11.794417     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:40:18 old-k8s-version-135957 kubelet[655]: E1009 19:40:18.794442     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: I1009 19:40:23.793618     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:40:23 old-k8s-version-135957 kubelet[655]: E1009 19:40:23.793992     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:40:30 old-k8s-version-135957 kubelet[655]: E1009 19:40:30.794850     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: I1009 19:40:37.793548     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:40:37 old-k8s-version-135957 kubelet[655]: E1009 19:40:37.793924     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:40:43 old-k8s-version-135957 kubelet[655]: E1009 19:40:43.794198     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 19:40:49 old-k8s-version-135957 kubelet[655]: I1009 19:40:49.793621     655 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2635990d5e8d9417e598217568baa18a891d68e348b67e4337d159507afccb3e
	Oct 09 19:40:49 old-k8s-version-135957 kubelet[655]: E1009 19:40:49.793982     655 pod_workers.go:191] Error syncing pod f1441c31-5795-4d25-a253-1d407ce48354 ("dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hjq4h_kubernetes-dashboard(f1441c31-5795-4d25-a253-1d407ce48354)"
	Oct 09 19:40:54 old-k8s-version-135957 kubelet[655]: E1009 19:40:54.804705     655 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 09 19:40:54 old-k8s-version-135957 kubelet[655]: E1009 19:40:54.804766     655 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 09 19:40:54 old-k8s-version-135957 kubelet[655]: E1009 19:40:54.804928     655 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-n5d8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-jcsl5_kube-system(5a362d7
7-9bb8-435c-9f86-e1d8bbb32b46): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 09 19:40:54 old-k8s-version-135957 kubelet[655]: E1009 19:40:54.805167     655 pod_workers.go:191] Error syncing pod 5a362d77-9bb8-435c-9f86-e1d8bbb32b46 ("metrics-server-9975d5f86-jcsl5_kube-system(5a362d77-9bb8-435c-9f86-e1d8bbb32b46)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [ea122f5ea8fc6e36a226a62c42afab032c3c1cdec03c58593e3ae44bd5997a60] <==
	2024/10/09 19:35:30 Using namespace: kubernetes-dashboard
	2024/10/09 19:35:30 Using in-cluster config to connect to apiserver
	2024/10/09 19:35:30 Using secret token for csrf signing
	2024/10/09 19:35:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/09 19:35:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/09 19:35:30 Successful initial request to the apiserver, version: v1.20.0
	2024/10/09 19:35:30 Generating JWE encryption key
	2024/10/09 19:35:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/09 19:35:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/09 19:35:31 Initializing JWE encryption key from synchronized object
	2024/10/09 19:35:31 Creating in-cluster Sidecar client
	2024/10/09 19:35:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:35:31 Serving insecurely on HTTP port: 9090
	2024/10/09 19:36:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:36:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:37:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:37:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:38:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:38:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:39:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:39:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:40:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:40:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/09 19:35:30 Starting overwatch
	
	
	==> storage-provisioner [70879940cc7690fd1350f5136b44b129f7292089da7ad5afeabde81878228f4c] <==
	I1009 19:35:49.967221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 19:35:50.036826       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 19:35:50.039130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 19:36:07.550381       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 19:36:07.550825       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135957_69a63abd-7905-40cc-9489-c9d61be1bc57!
	I1009 19:36:07.550537       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"867abbea-7973-464a-a387-d64438a13bad", APIVersion:"v1", ResourceVersion:"833", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-135957_69a63abd-7905-40cc-9489-c9d61be1bc57 became leader
	I1009 19:36:07.651124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-135957_69a63abd-7905-40cc-9489-c9d61be1bc57!
	
	
	==> storage-provisioner [932274f4fc04057b5ff116ef31ac91a9a3bf6b7f70e2c7b9adbb29c3bd1c151f] <==
	I1009 19:35:05.101401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 19:35:35.103751       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135957 -n old-k8s-version-135957
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-135957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-jcsl5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-135957 describe pod metrics-server-9975d5f86-jcsl5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-135957 describe pod metrics-server-9975d5f86-jcsl5: exit status 1 (148.858645ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-jcsl5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-135957 describe pod metrics-server-9975d5f86-jcsl5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.83s)

                                                
                                    

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.82
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214.86
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/PullSecret 10.82
34 TestAddons/parallel/Registry 16.84
35 TestAddons/parallel/Ingress 18.23
36 TestAddons/parallel/InspektorGadget 11.87
37 TestAddons/parallel/MetricsServer 6.82
39 TestAddons/parallel/CSI 43.31
40 TestAddons/parallel/Headlamp 16.74
41 TestAddons/parallel/CloudSpanner 5.58
42 TestAddons/parallel/LocalPath 51.52
43 TestAddons/parallel/NvidiaDevicePlugin 6.59
44 TestAddons/parallel/Yakd 11.84
45 TestAddons/StoppedEnableDisable 12.29
46 TestCertOptions 37.81
47 TestCertExpiration 231.6
49 TestForceSystemdFlag 35.72
50 TestForceSystemdEnv 41.36
51 TestDockerEnvContainerd 43.73
56 TestErrorSpam/setup 32.17
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.1
59 TestErrorSpam/pause 1.7
60 TestErrorSpam/unpause 1.84
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.94
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.86
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.06
73 TestFunctional/serial/CacheCmd/cache/add_local 1.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 40.29
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.68
84 TestFunctional/serial/LogsFileCmd 1.73
85 TestFunctional/serial/InvalidService 4.68
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 10.39
89 TestFunctional/parallel/DryRun 0.42
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1
95 TestFunctional/parallel/ServiceCmdConnect 9.67
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 25.26
99 TestFunctional/parallel/SSHCmd 0.67
100 TestFunctional/parallel/CpCmd 2.11
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.14
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
111 TestFunctional/parallel/License 0.31
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.39
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
127 TestFunctional/parallel/ServiceCmd/List 0.63
128 TestFunctional/parallel/MountCmd/any-port 8.2
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
131 TestFunctional/parallel/ServiceCmd/Format 0.51
132 TestFunctional/parallel/ServiceCmd/URL 0.47
133 TestFunctional/parallel/MountCmd/specific-port 1.83
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.61
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.28
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.37
142 TestFunctional/parallel/ImageCommands/Setup 0.93
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 111.05
160 TestMultiControlPlane/serial/DeployApp 46.64
161 TestMultiControlPlane/serial/PingHostFromPods 1.63
162 TestMultiControlPlane/serial/AddWorkerNode 21.43
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
165 TestMultiControlPlane/serial/CopyFile 18.57
166 TestMultiControlPlane/serial/StopSecondaryNode 12.87
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.54
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.33
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 158.79
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.47
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
173 TestMultiControlPlane/serial/StopCluster 36.07
174 TestMultiControlPlane/serial/RestartCluster 40.21
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
176 TestMultiControlPlane/serial/AddSecondaryNode 43.22
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
181 TestJSONOutput/start/Command 56.24
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.74
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.76
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 39.5
207 TestKicCustomNetwork/use_default_bridge_network 31.82
208 TestKicExistingNetwork 30.43
209 TestKicCustomSubnet 33.17
210 TestKicStaticIP 34.87
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 67.54
215 TestMountStart/serial/StartWithMountFirst 6.48
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 9.25
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.51
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 80.73
227 TestMultiNode/serial/DeployApp2Nodes 19.83
228 TestMultiNode/serial/PingHostFrom2Pods 0.98
229 TestMultiNode/serial/AddNode 16.1
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.7
232 TestMultiNode/serial/CopyFile 9.92
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 9.46
235 TestMultiNode/serial/RestartKeepsNodes 80.35
236 TestMultiNode/serial/DeleteNode 5.18
237 TestMultiNode/serial/StopMultiNode 24.06
238 TestMultiNode/serial/RestartMultiNode 48.44
239 TestMultiNode/serial/ValidateNameConflict 32.11
244 TestPreload 127.23
246 TestScheduledStopUnix 109.16
249 TestInsufficientStorage 10.33
250 TestRunningBinaryUpgrade 84.35
252 TestKubernetesUpgrade 105.34
253 TestMissingContainerUpgrade 176.59
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.57
257 TestNoKubernetes/serial/StartWithStopK8s 17.77
258 TestNoKubernetes/serial/Start 5.04
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
260 TestNoKubernetes/serial/ProfileList 0.95
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 7.36
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
264 TestStoppedBinaryUpgrade/Setup 1.04
265 TestStoppedBinaryUpgrade/Upgrade 125.89
274 TestPause/serial/Start 78.29
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
276 TestPause/serial/SecondStartNoReconfiguration 6.7
284 TestNetworkPlugins/group/false 5.27
285 TestPause/serial/Pause 0.96
286 TestPause/serial/VerifyStatus 0.36
287 TestPause/serial/Unpause 0.81
288 TestPause/serial/PauseAgain 1.1
289 TestPause/serial/DeletePaused 2.86
293 TestPause/serial/VerifyDeletedResources 0.17
295 TestStartStop/group/old-k8s-version/serial/FirstStart 155.85
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.68
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.71
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.59
300 TestStartStop/group/old-k8s-version/serial/Stop 12.33
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.6
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.34
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.85
308 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.05
313 TestStartStop/group/embed-certs/serial/FirstStart 65.21
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
317 TestStartStop/group/old-k8s-version/serial/Pause 2.89
319 TestStartStop/group/no-preload/serial/FirstStart 72.41
320 TestStartStop/group/embed-certs/serial/DeployApp 9.47
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.47
322 TestStartStop/group/embed-certs/serial/Stop 12.25
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
324 TestStartStop/group/embed-certs/serial/SecondStart 268.08
325 TestStartStop/group/no-preload/serial/DeployApp 8.36
326 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
327 TestStartStop/group/no-preload/serial/Stop 12.06
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/no-preload/serial/SecondStart 268.62
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.07
335 TestStartStop/group/newest-cni/serial/FirstStart 36.74
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.66
338 TestStartStop/group/newest-cni/serial/Stop 1.29
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 18.29
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
344 TestStartStop/group/no-preload/serial/Pause 4.51
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
348 TestStartStop/group/newest-cni/serial/Pause 3.78
349 TestNetworkPlugins/group/auto/Start 69.88
350 TestNetworkPlugins/group/kindnet/Start 56.77
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
353 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
354 TestNetworkPlugins/group/auto/KubeletFlags 0.35
355 TestNetworkPlugins/group/auto/NetCatPod 8.35
356 TestNetworkPlugins/group/kindnet/DNS 0.2
357 TestNetworkPlugins/group/kindnet/Localhost 0.15
358 TestNetworkPlugins/group/kindnet/HairPin 0.15
359 TestNetworkPlugins/group/auto/DNS 0.24
360 TestNetworkPlugins/group/auto/Localhost 0.15
361 TestNetworkPlugins/group/auto/HairPin 0.15
362 TestNetworkPlugins/group/calico/Start 77.67
363 TestNetworkPlugins/group/custom-flannel/Start 59.72
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
366 TestNetworkPlugins/group/custom-flannel/DNS 0.21
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.41
371 TestNetworkPlugins/group/calico/NetCatPod 11.34
372 TestNetworkPlugins/group/enable-default-cni/Start 52.78
373 TestNetworkPlugins/group/calico/DNS 0.24
374 TestNetworkPlugins/group/calico/Localhost 0.19
375 TestNetworkPlugins/group/calico/HairPin 0.2
376 TestNetworkPlugins/group/flannel/Start 56.24
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.46
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/bridge/Start 77.22
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
385 TestNetworkPlugins/group/flannel/NetCatPod 9.35
386 TestNetworkPlugins/group/flannel/DNS 0.23
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.19
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 11.26
391 TestNetworkPlugins/group/bridge/DNS 0.22
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (6.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-217697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-217697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.989856913s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.99s)
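
Note: the download-only flow above can be reproduced by hand. This is a minimal sketch, assuming the binary built by this job sits at out/minikube-linux-arm64 and Docker is available; the profile name download-demo is a placeholder.

	# cache the v1.20.0 images and the kic base image without creating a cluster
	out/minikube-linux-arm64 start --download-only -p download-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker
	# remove the throwaway profile afterwards
	out/minikube-linux-arm64 delete -p download-demo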

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1009 18:46:26.836466    7596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1009 18:46:26.836547    7596 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
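
Note: this subtest only confirms that the preload tarball landed in the local cache. A rough shell equivalent, using the path from the log line above (the MINIKUBE_HOME prefix will differ on other machines):

	ls -lh /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4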

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-217697
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-217697: exit status 85 (68.928345ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-217697 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |          |
	|         | -p download-only-217697        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:19.892577    7601 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:19.892807    7601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:19.892839    7601 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:19.892861    7601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:19.893156    7601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	W1009 18:46:19.893305    7601 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19780-2290/.minikube/config/config.json: open /home/jenkins/minikube-integration/19780-2290/.minikube/config/config.json: no such file or directory
	I1009 18:46:19.893721    7601 out.go:352] Setting JSON to true
	I1009 18:46:19.894545    7601 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1723,"bootTime":1728497857,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 18:46:19.894655    7601 start.go:139] virtualization:  
	I1009 18:46:19.896933    7601 out.go:97] [download-only-217697] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1009 18:46:19.897092    7601 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:46:19.897191    7601 notify.go:220] Checking for updates...
	I1009 18:46:19.899082    7601 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:19.901008    7601 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:19.902842    7601 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:46:19.904607    7601 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 18:46:19.905977    7601 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:46:19.908493    7601 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:19.908778    7601 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:19.930219    7601 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:19.930331    7601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:20.358265    7601 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:46:20.348403679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:20.358431    7601 docker.go:318] overlay module found
	I1009 18:46:20.360212    7601 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:20.360242    7601 start.go:297] selected driver: docker
	I1009 18:46:20.360250    7601 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:20.360374    7601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:20.410165    7601 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:46:20.400899984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:20.410357    7601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:20.410656    7601 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:46:20.410836    7601 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:20.412706    7601 out.go:169] Using Docker driver with root privileges
	I1009 18:46:20.414583    7601 cni.go:84] Creating CNI manager for ""
	I1009 18:46:20.414644    7601 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:46:20.414657    7601 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:20.414740    7601 start.go:340] cluster config:
	{Name:download-only-217697 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-217697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:20.416335    7601 out.go:97] Starting "download-only-217697" primary control-plane node in "download-only-217697" cluster
	I1009 18:46:20.416354    7601 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1009 18:46:20.417869    7601 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:20.417896    7601 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1009 18:46:20.418047    7601 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:20.433367    7601 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:20.433549    7601 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:20.433652    7601 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:20.476305    7601 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1009 18:46:20.476329    7601 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:20.476468    7601 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1009 18:46:20.478228    7601 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 18:46:20.478249    7601 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1009 18:46:20.567388    7601 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-217697 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217697"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
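
Note: the non-zero exit is expected here; the profile was created with --download-only, so no control-plane host exists and "minikube logs" can only print the audit/last-start output captured above. A hedged way to observe the same behaviour against a download-only profile (name as above):

	out/minikube-linux-arm64 logs -p download-only-217697 || echo "logs exited with status $?"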

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-217697
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-003905 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-003905 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.816437834s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1009 18:46:33.070807    7596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1009 18:46:33.070855    7596 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-003905
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-003905: exit status 85 (82.835997ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-217697 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-217697        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-217697        | download-only-217697 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | -o=json --download-only        | download-only-003905 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-003905        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:27.301830    7802 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:27.301988    7802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:27.302001    7802 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:27.302007    7802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:27.302295    7802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 18:46:27.302811    7802 out.go:352] Setting JSON to true
	I1009 18:46:27.303657    7802 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1731,"bootTime":1728497857,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 18:46:27.303730    7802 start.go:139] virtualization:  
	I1009 18:46:27.305968    7802 out.go:97] [download-only-003905] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:46:27.306219    7802 notify.go:220] Checking for updates...
	I1009 18:46:27.308991    7802 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:27.310554    7802 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:27.312215    7802 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:46:27.313687    7802 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 18:46:27.315281    7802 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 18:46:27.318223    7802 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:27.318480    7802 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:27.339215    7802 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:27.339333    7802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:27.410813    7802 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:27.400971199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:27.410926    7802 docker.go:318] overlay module found
	I1009 18:46:27.413463    7802 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:27.413501    7802 start.go:297] selected driver: docker
	I1009 18:46:27.413508    7802 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:27.413610    7802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:27.466924    7802 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:27.457685799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:46:27.467132    7802 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:27.467392    7802 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1009 18:46:27.467550    7802 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:27.469454    7802 out.go:169] Using Docker driver with root privileges
	I1009 18:46:27.470710    7802 cni.go:84] Creating CNI manager for ""
	I1009 18:46:27.470771    7802 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1009 18:46:27.470783    7802 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:27.470858    7802 start.go:340] cluster config:
	{Name:download-only-003905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-003905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:27.472120    7802 out.go:97] Starting "download-only-003905" primary control-plane node in "download-only-003905" cluster
	I1009 18:46:27.472150    7802 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1009 18:46:27.473887    7802 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:27.473922    7802 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 18:46:27.474097    7802 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:27.489445    7802 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:27.489567    7802 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:27.489591    7802 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:27.489596    7802 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:27.489607    7802 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:27.535681    7802 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1009 18:46:27.535706    7802 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:27.535866    7802 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1009 18:46:27.538083    7802 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1009 18:46:27.538112    7802 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1009 18:46:27.624968    7802 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1009 18:46:31.493617    7802 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1009 18:46:31.493759    7802 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19780-2290/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-003905 host does not exist
	  To start a cluster, run: "minikube start -p download-only-003905"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-003905
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1009 18:46:34.322009    7596 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-587388 --alsologtostderr --binary-mirror http://127.0.0.1:34277 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-587388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-587388
--- PASS: TestBinaryMirror (0.57s)
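
Note: --binary-mirror points minikube at an alternative location for the Kubernetes release binaries; the command above targets 127.0.0.1:34277, presumably a short-lived local mirror started by the test harness. A sketch of the flag under the assumption that such a mirror is already listening (the URL and the profile name mirror-demo are placeholders):

	out/minikube-linux-arm64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:34277 \
	  --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p mirror-demo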

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-514774
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-514774: exit status 85 (71.862449ms)

                                                
                                                
-- stdout --
	* Profile "addons-514774" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-514774"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-514774
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-514774: exit status 85 (73.349295ms)

                                                
                                                
-- stdout --
	* Profile "addons-514774" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-514774"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
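
Note: both PreSetup subtests assert that addon toggles fail cleanly (exit status 85 with a "Profile ... not found" hint) when the target profile has not been created yet. The behaviour is easy to reproduce with any name missing from "minikube profile list" (no-such-profile below is a placeholder):

	out/minikube-linux-arm64 addons enable dashboard -p no-such-profile; echo "exit: $?"
	out/minikube-linux-arm64 addons disable dashboard -p no-such-profile; echo "exit: $?"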

                                                
                                    
x
+
TestAddons/Setup (214.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-514774 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-514774 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m34.859672645s)
--- PASS: TestAddons/Setup (214.86s)
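
Note: Setup brings up a single profile with every addon under test enabled up front via repeated --addons flags. On an already-running profile the same addons can also be toggled one at a time; a minimal sketch against the profile created above:

	out/minikube-linux-arm64 -p addons-514774 addons list
	out/minikube-linux-arm64 -p addons-514774 addons enable metrics-server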

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-514774 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-514774 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
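
Note: this subtest verifies that the gcp-auth addon places its credentials secret into namespaces as they are created. Manual equivalent (the namespace name demo-ns is arbitrary):

	kubectl --context addons-514774 create ns demo-ns
	kubectl --context addons-514774 get secret gcp-auth -n demo-ns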

                                                
                                    
x
+
TestAddons/serial/GCPAuth/PullSecret (10.82s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-514774 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-514774 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d8402ef0-62ee-4d9b-a243-e6b3629bd153] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d8402ef0-62ee-4d9b-a243-e6b3629bd153] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 10.005110433s
addons_test.go:633: (dbg) Run:  kubectl --context addons-514774 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-514774 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-514774 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-514774 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (10.82s)
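
Note: the pull-secret subtest deploys a busybox pod and checks that the gcp-auth webhook injected the credentials file and related environment variables. The same probes can be repeated by hand while the pod is running:

	kubectl --context addons-514774 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-514774 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
	kubectl --context addons-514774 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"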

                                                
                                    
x
+
TestAddons/parallel/Registry (16.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.580823ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-cbzc4" [243b50be-9240-4b5c-b75e-00643fe07edd] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004145584s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dthqj" [791df215-51d1-4c9b-969e-92e1807ed15a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003732834s
addons_test.go:331: (dbg) Run:  kubectl --context addons-514774 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-514774 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-514774 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.833943539s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 ip
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.84s)
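
Note: the registry addon is probed from inside the cluster: a throwaway busybox pod checks that the registry Service DNS name answers HTTP. One-liner equivalent, lifted from the command above:

	kubectl --context addons-514774 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"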

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-514774 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-514774 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-514774 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bdeb7c68-a712-4152-b8f0-b6b0a193db8b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bdeb7c68-a712-4152-b8f0-b6b0a193db8b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003860482s
I1009 18:55:31.532145    7596 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-514774 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable ingress --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable ingress --alsologtostderr -v=1: (7.740478146s)
--- PASS: TestAddons/parallel/Ingress (18.23s)
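
Note: the ingress check curls the NGINX ingress controller from inside the minikube node (so 127.0.0.1 is the node itself) with a Host header matching the Ingress rule, and the ingress-dns check resolves a test hostname against the node IP. Manual equivalents using the addresses from the log:

	out/minikube-linux-arm64 -p addons-514774 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2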

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dtfmd" [bb833ad9-d366-4ae2-b5f6-2d4ffb20b5ad] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003927823s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable inspektor-gadget --alsologtostderr -v=1: (5.864103164s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.624867ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-rtf2q" [af6559a7-0300-48fd-8898-9f5ab34e7686] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003547015s
addons_test.go:402: (dbg) Run:  kubectl --context addons-514774 top pods -n kube-system
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.82s)
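
Note: once the metrics-server pod is healthy, the test only confirms that resource metrics are being served. The same check from a shell ("top nodes" is an extra sanity check, not part of the test):

	kubectl --context addons-514774 top pods -n kube-system
	kubectl --context addons-514774 top nodes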

                                                
                                    
x
+
TestAddons/parallel/CSI (43.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1009 18:54:13.008960    7596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 18:54:13.013669    7596 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:54:13.013702    7596 kapi.go:107] duration metric: took 7.765736ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.776763ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/10/09 18:54:17 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [393b0153-54a7-4997-bf85-a4c2d136162b] Pending
helpers_test.go:344: "task-pv-pod" [393b0153-54a7-4997-bf85-a4c2d136162b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [393b0153-54a7-4997-bf85-a4c2d136162b] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004284028s
addons_test.go:511: (dbg) Run:  kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-514774 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-514774 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-514774 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-514774 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fae072b6-3b41-4917-a1df-b3af390e105d] Pending
helpers_test.go:344: "task-pv-pod-restore" [fae072b6-3b41-4917-a1df-b3af390e105d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fae072b6-3b41-4917-a1df-b3af390e105d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003960864s
addons_test.go:553: (dbg) Run:  kubectl --context addons-514774 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-514774 delete pod task-pv-pod-restore: (1.169752503s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-514774 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-514774 delete volumesnapshot new-snapshot-demo
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861414479s)
--- PASS: TestAddons/parallel/CSI (43.31s)
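Note: the CSI flow above can be replayed by hand. The sketch below reuses the same testdata manifests and the profile from this run (addons-514774); the test's wait loops are replaced by simple one-shot status reads.

    # Provision a PVC against the csi-hostpath driver and check that it binds
    kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-514774 get pvc hpvc -n default -o jsonpath={.status.phase}
    # Attach it to a pod, then snapshot the volume
    kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-514774 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}
    # Delete the original pod/PVC and restore from the snapshot
    kubectl --context addons-514774 delete pod task-pv-pod
    kubectl --context addons-514774 delete pvc hpvc
    kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-514774 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml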

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-514774 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-514774 --alsologtostderr -v=1: (1.008045778s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-sfhcb" [313eac0a-b9d0-40aa-b516-e9db9b8b1a42] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-sfhcb" [313eac0a-b9d0-40aa-b516-e9db9b8b1a42] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004071085s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable headlamp --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable headlamp --alsologtostderr -v=1: (5.730832057s)
--- PASS: TestAddons/parallel/Headlamp (16.74s)
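Note: the addon tests in this group follow the same enable, wait, disable pattern. A minimal manual equivalent for Headlamp (the kubectl check is illustrative; the test waits via its own helpers):

    out/minikube-linux-arm64 addons enable headlamp -p addons-514774 --alsologtostderr -v=1
    # Wait for the dashboard pod to come up in the headlamp namespace
    kubectl --context addons-514774 -n headlamp get pods -l app.kubernetes.io/name=headlamp
    out/minikube-linux-arm64 -p addons-514774 addons disable headlamp --alsologtostderr -v=1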

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-kr752" [49419b2c-fddb-4bc6-a165-1f1e221dd457] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003909815s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-514774 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-514774 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2d834b84-d8de-47a6-afe8-b439ba4bf91d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2d834b84-d8de-47a6-afe8-b439ba4bf91d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2d834b84-d8de-47a6-afe8-b439ba4bf91d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003181504s
addons_test.go:902: (dbg) Run:  kubectl --context addons-514774 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 ssh "cat /opt/local-path-provisioner/pvc-86ef4ab8-c0af-48b4-959e-ba7b9261b064_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-514774 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-514774 delete pvc test-pvc
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.300788683s)
--- PASS: TestAddons/parallel/LocalPath (51.52s)
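Note: the local-path test writes through a dynamically provisioned hostPath volume and then reads the file back from the node. Condensed, with the provisioned directory name left as a placeholder because the PVC UID differs per run:

    kubectl --context addons-514774 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-514774 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Once the busybox pod completes, the written file is visible on the node
    out/minikube-linux-arm64 -p addons-514774 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"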

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b26r7" [f747350f-cab4-4932-aed7-6d57e3d8ab71] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003534822s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-85nlx" [fd93173f-dcd8-4869-8d52-7bb6c165016e] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004733576s
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 -p addons-514774 addons disable yakd --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-arm64 -p addons-514774 addons disable yakd --alsologtostderr -v=1: (5.829566428s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-514774
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-514774: (12.00794463s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-514774
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-514774
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-514774
--- PASS: TestAddons/StoppedEnableDisable (12.29s)

                                                
                                    
x
+
TestCertOptions (37.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-480357 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-480357 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.089140614s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-480357 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-480357 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-480357 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-480357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-480357
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-480357: (2.057875862s)
--- PASS: TestCertOptions (37.81s)
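Note: TestCertOptions asserts that the extra SANs and the non-default API server port end up in the generated apiserver certificate. A minimal manual check against the same profile (the grep filter is illustrative, not part of the test):

    out/minikube-linux-arm64 start -p cert-options-480357 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # Dump the apiserver certificate from inside the node and look for the requested SANs
    out/minikube-linux-arm64 -p cert-options-480357 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"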

                                                
                                    
x
+
TestCertExpiration (231.6s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.146930436s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (10.189625301s)
helpers_test.go:175: Cleaning up "cert-expiration-383078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-383078
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-383078: (2.263604615s)
--- PASS: TestCertExpiration (231.60s)
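Note: the expiration test starts the profile with a deliberately short certificate lifetime and, once the certificates have lapsed, re-runs start with a longer one; roughly:

    # First start with 3-minute certs
    out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # ... wait for the certificates to expire ...
    # Restart the same profile with ~1-year certs, forcing regeneration
    out/minikube-linux-arm64 start -p cert-expiration-383078 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd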

                                                
                                    
x
+
TestForceSystemdFlag (35.72s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-535862 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-535862 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.340160164s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-535862 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-535862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-535862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-535862: (2.071997221s)
--- PASS: TestForceSystemdFlag (35.72s)

                                                
                                    
x
+
TestForceSystemdEnv (41.36s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-872019 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-872019 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.57824701s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-872019 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-872019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-872019
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-872019: (2.447358268s)
--- PASS: TestForceSystemdEnv (41.36s)

                                                
                                    
x
+
TestDockerEnvContainerd (43.73s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-251333 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-251333 --driver=docker  --container-runtime=containerd: (28.201160824s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-251333"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Byt0SPFCHnmu/agent.30184" SSH_AGENT_PID="30185" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Byt0SPFCHnmu/agent.30184" SSH_AGENT_PID="30185" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Byt0SPFCHnmu/agent.30184" SSH_AGENT_PID="30185" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.16385612s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Byt0SPFCHnmu/agent.30184" SSH_AGENT_PID="30185" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-251333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-251333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-251333: (1.962810956s)
--- PASS: TestDockerEnvContainerd (43.73s)
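Note: this test drives the node's Docker endpoint over SSH. Interactively, the usual equivalent is to eval the docker-env output rather than exporting SSH_AUTH_SOCK/DOCKER_HOST by hand; a sketch using this run's profile name:

    # Point the local docker CLI at the minikube node over SSH
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-251333)"
    docker version
    # Build against the node's runtime (BuildKit disabled, as in the test)
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls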

                                                
                                    
x
+
TestErrorSpam/setup (32.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-389545 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-389545 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-389545 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-389545 --driver=docker  --container-runtime=containerd: (32.165945655s)
--- PASS: TestErrorSpam/setup (32.17s)

                                                
                                    
x
+
TestErrorSpam/start (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
x
+
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 stop: (1.270015303s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-389545 --log_dir /tmp/nospam-389545 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19780-2290/.minikube/files/etc/test/nested/copy/7596/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (49.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-072610 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.941221723s)
--- PASS: TestFunctional/serial/StartWithProxy (49.94s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.86s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 18:58:17.036447    7596 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-072610 --alsologtostderr -v=8: (5.859912897s)
functional_test.go:663: soft start took 5.863473892s for "functional-072610" cluster.
I1009 18:58:22.896753    7596 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.86s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-072610 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:3.1: (1.468787108s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:3.3: (1.364726903s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:latest: (1.229573822s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.06s)
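Note: the cache subcommands exercised in this group mirror ordinary usage; condensed against the same profile:

    # Pre-pull an image into minikube's cache and load it into the node
    out/minikube-linux-arm64 -p functional-072610 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    # Confirm the image is visible to containerd inside the node
    out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl images
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1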

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-072610 /tmp/TestFunctionalserialCacheCmdcacheadd_local2585447787/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache add minikube-local-cache-test:functional-072610
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache delete minikube-local-cache-test:functional-072610
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-072610
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.027504ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 cache reload: (1.070171929s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)
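Note: cache reload re-pushes cached images into the node, which is what this test verifies after deleting one with crictl; by hand:

    # Remove the image from the node, then restore it from the local cache
    out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-072610 cache reload
    # Should now succeed again
    out/minikube-linux-arm64 -p functional-072610 ssh sudo crictl inspecti registry.k8s.io/pause:latest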

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 kubectl -- --context functional-072610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-072610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (40.29s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-072610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.289488734s)
functional_test.go:761: restart took 40.289585814s for "functional-072610" cluster.
I1009 18:59:11.486914    7596 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.29s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-072610 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 logs: (1.675328736s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 logs --file /tmp/TestFunctionalserialLogsFileCmd3781911474/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 logs --file /tmp/TestFunctionalserialLogsFileCmd3781911474/001/logs.txt: (1.72893171s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.68s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-072610 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-072610
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-072610: exit status 115 (645.19466ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30630 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-072610 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 config get cpus: exit status 14 (71.22695ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 config get cpus: exit status 14 (90.837922ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
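Note: config get exits non-zero (status 14, "specified key could not be found in config") when a key is unset, which is what the test checks between set and unset calls; the pattern is:

    out/minikube-linux-arm64 -p functional-072610 config set cpus 2
    out/minikube-linux-arm64 -p functional-072610 config get cpus    # expected to print 2
    out/minikube-linux-arm64 -p functional-072610 config unset cpus
    out/minikube-linux-arm64 -p functional-072610 config get cpus    # exits with status 14: key not found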

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-072610 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-072610 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 44891: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.39s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-072610 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (180.309594ms)

                                                
                                                
-- stdout --
	* [functional-072610] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:59:52.031021   44511 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:59:52.031137   44511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:52.031142   44511 out.go:358] Setting ErrFile to fd 2...
	I1009 18:59:52.031147   44511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:52.031498   44511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 18:59:52.031898   44511 out.go:352] Setting JSON to false
	I1009 18:59:52.033015   44511 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2535,"bootTime":1728497857,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 18:59:52.033089   44511 start.go:139] virtualization:  
	I1009 18:59:52.035132   44511 out.go:177] * [functional-072610] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 18:59:52.036679   44511 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:59:52.036849   44511 notify.go:220] Checking for updates...
	I1009 18:59:52.039508   44511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:59:52.041037   44511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:59:52.042733   44511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 18:59:52.044337   44511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:59:52.045591   44511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:59:52.047261   44511 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 18:59:52.047787   44511 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:59:52.069161   44511 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:59:52.069292   44511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:59:52.141106   44511 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:59:52.131609701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:59:52.141215   44511 docker.go:318] overlay module found
	I1009 18:59:52.142820   44511 out.go:177] * Using the docker driver based on existing profile
	I1009 18:59:52.144241   44511 start.go:297] selected driver: docker
	I1009 18:59:52.144256   44511 start.go:901] validating driver "docker" against &{Name:functional-072610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-072610 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:59:52.144362   44511 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:59:52.146532   44511 out.go:201] 
	W1009 18:59:52.147952   44511 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:59:52.149456   44511 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
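Note: the dry-run variant fails purely on validation: a 250MB request is below minikube's 1800MB floor, so start exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the cluster. The failing invocation is simply:

    # Fails validation before any cluster changes are made (exit status 23)
    out/minikube-linux-arm64 start -p functional-072610 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd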

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-072610 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-072610 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (179.20391ms)

                                                
                                                
-- stdout --
	* [functional-072610] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:59:51.851404   44468 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:59:51.851587   44468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:51.851617   44468 out.go:358] Setting ErrFile to fd 2...
	I1009 18:59:51.851641   44468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:59:51.852479   44468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 18:59:51.852932   44468 out.go:352] Setting JSON to false
	I1009 18:59:51.853955   44468 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2535,"bootTime":1728497857,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 18:59:51.854056   44468 start.go:139] virtualization:  
	I1009 18:59:51.856257   44468 out.go:177] * [functional-072610] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1009 18:59:51.857920   44468 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:59:51.857987   44468 notify.go:220] Checking for updates...
	I1009 18:59:51.860455   44468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:59:51.861853   44468 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 18:59:51.863460   44468 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 18:59:51.864934   44468 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 18:59:51.867053   44468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:59:51.869100   44468 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 18:59:51.869678   44468 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:59:51.891349   44468 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:59:51.891569   44468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:59:51.960253   44468 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 18:59:51.949079738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 18:59:51.960374   44468 docker.go:318] overlay module found
	I1009 18:59:51.962376   44468 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1009 18:59:51.963816   44468 start.go:297] selected driver: docker
	I1009 18:59:51.963831   44468 start.go:901] validating driver "docker" against &{Name:functional-072610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-072610 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:59:51.963931   44468 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:59:51.966134   44468 out.go:201] 
	W1009 18:59:51.967588   44468 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:59:51.968844   44468 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
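
Note: the status checks above can be reproduced by hand against the same profile. A minimal sketch, assuming a locally installed `minikube` binary in place of out/minikube-linux-arm64:

    # plain human-readable summary
    minikube -p functional-072610 status
    # custom go-template output; the fields are .Host, .Kubelet, .APIServer and .Kubeconfig
    # (the label text before each colon is arbitrary)
    minikube -p functional-072610 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # machine-readable output
    minikube -p functional-072610 status -o json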

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-072610 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-072610 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-fr6b5" [b8d8ae74-3b8d-49ed-ba4e-8c3c68e69327] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-fr6b5" [b8d8ae74-3b8d-49ed-ba4e-8c3c68e69327] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003866332s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31925
functional_test.go:1675: http://192.168.49.2:31925: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-fr6b5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31925
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.67s)
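
Note: the NodePort round-trip above boils down to the sequence below (a sketch, assuming `minikube` and `kubectl` on PATH; the curl at the end stands in for the HTTP GET the test performs):

    kubectl --context functional-072610 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-072610 expose deployment hello-node-connect --type=NodePort --port=8080
    # wait for the pod behind the service to become ready
    kubectl --context functional-072610 wait --for=condition=Ready pod -l app=hello-node-connect --timeout=120s
    # ask minikube for the reachable NodePort URL and hit it
    URL=$(minikube -p functional-072610 service hello-node-connect --url)
    curl "$URL"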

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ecc154bf-ec19-40cb-ad08-d7fbe04a79b1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004385956s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-072610 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-072610 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-072610 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-072610 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2caa70b4-02f2-4024-a142-d82dc3a01e35] Pending
helpers_test.go:344: "sp-pod" [2caa70b4-02f2-4024-a142-d82dc3a01e35] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2caa70b4-02f2-4024-a142-d82dc3a01e35] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003707121s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-072610 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-072610 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-072610 delete -f testdata/storage-provisioner/pod.yaml: (1.284608743s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-072610 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [57c52b99-14ab-477f-b3cb-054635ac3180] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [57c52b99-14ab-477f-b3cb-054635ac3180] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003398233s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-072610 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.26s)
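
Note: the persistence check above follows a simple write, recreate, read pattern. A sketch of the same steps (the testdata paths are relative to minikube's integration-test working directory):

    kubectl --context functional-072610 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-072610 get pvc myclaim
    kubectl --context functional-072610 apply -f testdata/storage-provisioner/pod.yaml
    # write through the claim, then delete the pod
    kubectl --context functional-072610 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-072610 delete -f testdata/storage-provisioner/pod.yaml
    # recreate the pod against the same claim; the file should still be there
    kubectl --context functional-072610 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-072610 exec sp-pod -- ls /tmp/mount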

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh -n functional-072610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cp functional-072610:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1842410329/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh -n functional-072610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh -n functional-072610 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)
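
Note: the three cp round-trips above map to the commands below (a sketch; the host-side destination path is arbitrary):

    # host -> node
    minikube -p functional-072610 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-072610 ssh -n functional-072610 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p functional-072610 cp functional-072610:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> node into a directory that does not exist yet; the follow-up cat confirms it was created
    minikube -p functional-072610 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    minikube -p functional-072610 ssh -n functional-072610 "sudo cat /tmp/does/not/exist/cp-test.txt"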

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7596/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /etc/test/nested/copy/7596/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7596.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /etc/ssl/certs/7596.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7596.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /usr/share/ca-certificates/7596.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /etc/ssl/certs/75962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /usr/share/ca-certificates/75962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
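
Note: the cert filenames above (7596.pem, 75962.pem, 51391683.0, 3ec20f2e.0) are specific to this test run; a manual spot-check just cats the expected paths inside the node, substituting whatever cert files the host side placed for the profile:

    minikube -p functional-072610 ssh "sudo cat /etc/ssl/certs/7596.pem"
    minikube -p functional-072610 ssh "sudo cat /usr/share/ca-certificates/7596.pem"
    # hash-named copy of the same certificate
    minikube -p functional-072610 ssh "sudo cat /etc/ssl/certs/51391683.0"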

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-072610 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "sudo systemctl is-active docker": exit status 1 (354.545176ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "sudo systemctl is-active crio": exit status 1 (364.439287ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
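
Note: on this containerd-based profile the test only asserts that the other runtimes are disabled; `systemctl is-active` prints "inactive" and exits non-zero (status 3 here) for a stopped unit. A quick manual check (the containerd line is not part of the test and is added only for contrast):

    minikube -p functional-072610 ssh "sudo systemctl is-active containerd"   # expected: active
    minikube -p functional-072610 ssh "sudo systemctl is-active docker"       # expected: inactive, non-zero exit
    minikube -p functional-072610 ssh "sudo systemctl is-active crio"         # expected: inactive, non-zero exit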

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 42183: os: process already finished
helpers_test.go:508: unable to kill pid 41992: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-072610 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3cf0827f-c8e9-48ba-b052-c3a7e8db3549] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3cf0827f-c8e9-48ba-b052-c3a7e8db3549] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004569357s
I1009 18:59:30.955134    7596 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-072610 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.95.235 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
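
Note: the tunnel sub-tests above amount to: start `minikube tunnel`, create a LoadBalancer service, read its ingress IP from the service status, and hit that IP directly. A sketch (run the tunnel in a separate terminal; it stays in the foreground and may prompt for elevated privileges to add routes):

    minikube -p functional-072610 tunnel
    # in another shell:
    kubectl --context functional-072610 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-072610 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"   # e.g. http://10.104.95.235 in this run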

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-072610 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-072610 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-072610 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-m24b7" [e0257666-15aa-41a6-9b6b-5df6d36f40c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-m24b7" [e0257666-15aa-41a6-9b6b-5df6d36f40c2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003935897s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "336.400972ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "57.653486ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "384.757289ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "74.304296ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.20s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdany-port4059115195/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728500388313467897" to /tmp/TestFunctionalparallelMountCmdany-port4059115195/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728500388313467897" to /tmp/TestFunctionalparallelMountCmdany-port4059115195/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728500388313467897" to /tmp/TestFunctionalparallelMountCmdany-port4059115195/001/test-1728500388313467897
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (469.179488ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:59:48.782919    7596 retry.go:31] will retry after 356.47832ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:59 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:59 test-1728500388313467897
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh cat /mount-9p/test-1728500388313467897
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-072610 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [39c8b9ab-4993-4c29-b871-1e0421b873fa] Pending
helpers_test.go:344: "busybox-mount" [39c8b9ab-4993-4c29-b871-1e0421b873fa] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [39c8b9ab-4993-4c29-b871-1e0421b873fa] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [39c8b9ab-4993-4c29-b871-1e0421b873fa] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003644975s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-072610 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdany-port4059115195/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.20s)
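
Note: the 9p mount flow above can be exercised by hand. A sketch, where /tmp/mount-src is a hypothetical host directory standing in for the test's temp dir:

    mkdir -p /tmp/mount-src && echo test-data > /tmp/mount-src/created-by-hand
    # keeps running until killed; use a separate terminal or send it to the background
    minikube mount -p functional-072610 /tmp/mount-src:/mount-9p
    # verify the 9p mount and its contents from inside the node
    minikube -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-072610 ssh -- ls -la /mount-9p
    # tear down, as the VerifyCleanup sub-test does further below
    minikube mount -p functional-072610 --kill=true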

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service list -o json
functional_test.go:1494: Took "509.456204ms" to run "out/minikube-linux-arm64 -p functional-072610 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32121
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32121
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)
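
Note: the ServiceCmd sub-tests above are different views of the same NodePort service; all of them can be run directly against the profile:

    minikube -p functional-072610 service list                                  # human-readable table
    minikube -p functional-072610 service list -o json                          # machine-readable
    minikube -p functional-072610 service --namespace=default --https --url hello-node
    minikube -p functional-072610 service hello-node --url --format='{{.IP}}'   # node IP only
    minikube -p functional-072610 service hello-node --url                      # full http URL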

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdspecific-port232595188/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.089706ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:59:56.918880    7596 retry.go:31] will retry after 277.524535ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdspecific-port232595188/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "sudo umount -f /mount-9p": exit status 1 (327.469089ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-072610 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdspecific-port232595188/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T" /mount1: exit status 1 (914.276411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:59:59.263737    7596 retry.go:31] will retry after 405.708568ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-072610 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-072610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup641921806/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.61s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 version -o=json --components: (1.279932545s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-072610 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-072610
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-072610
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-072610 image ls --format short --alsologtostderr:
I1009 19:00:09.284304   47480 out.go:345] Setting OutFile to fd 1 ...
I1009 19:00:09.284492   47480 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.284517   47480 out.go:358] Setting ErrFile to fd 2...
I1009 19:00:09.284536   47480 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.284907   47480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
I1009 19:00:09.285662   47480 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.285841   47480 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.286455   47480 cli_runner.go:164] Run: docker container inspect functional-072610 --format={{.State.Status}}
I1009 19:00:09.314091   47480 ssh_runner.go:195] Run: systemctl --version
I1009 19:00:09.314164   47480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-072610
I1009 19:00:09.342615   47480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/functional-072610/id_rsa Username:docker}
I1009 19:00:09.437589   47480 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
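
Note: the ImageCommands/ImageList* sub-tests (short here, table and json below) only differ in the --format flag; the same listing can be pulled straight from the profile (--alsologtostderr, as used above, additionally mirrors the command's internal log to stderr):

    minikube -p functional-072610 image ls --format short
    minikube -p functional-072610 image ls --format table
    minikube -p functional-072610 image ls --format json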

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-072610 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-072610  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-072610  | sha256:8eef95 | 991B   |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-072610 image ls --format table --alsologtostderr:
I1009 19:00:09.595069   47547 out.go:345] Setting OutFile to fd 1 ...
I1009 19:00:09.595268   47547 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.595281   47547 out.go:358] Setting ErrFile to fd 2...
I1009 19:00:09.595288   47547 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.595552   47547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
I1009 19:00:09.596401   47547 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.596605   47547 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.597334   47547 cli_runner.go:164] Run: docker container inspect functional-072610 --format={{.State.Status}}
I1009 19:00:09.618641   47547 ssh_runner.go:195] Run: systemctl --version
I1009 19:00:09.618699   47547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-072610
I1009 19:00:09.650097   47547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/functional-072610/id_rsa Username:docker}
I1009 19:00:09.742152   47547 ssh_runner.go:195] Run: sudo crictl images --output json
E1009 19:00:09.875651    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:09.882025    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:09.893371    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:09.914684    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:09.956017    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:10.037490    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-072610 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28
.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-072610"],"size":"2173567"},{"id":"sha256:8eef95442b0a25be
c072b03d65866b1b097a096144f355d79d946c4bdef47a39","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-072610"],"size":"991"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a60
1abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f
68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-072610 image ls --format json --alsologtostderr:
I1009 19:00:09.560477   47542 out.go:345] Setting OutFile to fd 1 ...
I1009 19:00:09.561146   47542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.561242   47542 out.go:358] Setting ErrFile to fd 2...
I1009 19:00:09.561265   47542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.561649   47542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
I1009 19:00:09.562732   47542 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.562980   47542 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.566983   47542 cli_runner.go:164] Run: docker container inspect functional-072610 --format={{.State.Status}}
I1009 19:00:09.596134   47542 ssh_runner.go:195] Run: systemctl --version
I1009 19:00:09.596185   47542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-072610
I1009 19:00:09.633546   47542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/functional-072610/id_rsa Username:docker}
I1009 19:00:09.725092   47542 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
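The stdout above is a single JSON array of image records with the fields id, repoDigests, repoTags and size (a byte count encoded as a string). A minimal Go sketch for consuming that output outside the test suite follows; the struct, the program itself and the stdin piping are illustrative assumptions, not part of minikube or its test code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// listedImage mirrors one element of the image ls --format json array shown above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string in the report
}

func main() {
	var images []listedImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-72s %s bytes\n", tag, img.Size)
	}
}

Piping out/minikube-linux-arm64 -p functional-072610 image ls --format json into such a program would print one line per image listed in the stdout above; the tool itself is only a sketch.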

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-072610 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-072610
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:8eef95442b0a25bec072b03d65866b1b097a096144f355d79d946c4bdef47a39
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-072610
size: "991"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-072610 image ls --format yaml --alsologtostderr:
I1009 19:00:09.268409   47481 out.go:345] Setting OutFile to fd 1 ...
I1009 19:00:09.268568   47481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.268577   47481 out.go:358] Setting ErrFile to fd 2...
I1009 19:00:09.268582   47481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:09.268908   47481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
I1009 19:00:09.269603   47481 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.269723   47481 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:09.270217   47481 cli_runner.go:164] Run: docker container inspect functional-072610 --format={{.State.Status}}
I1009 19:00:09.299668   47481 ssh_runner.go:195] Run: systemctl --version
I1009 19:00:09.299726   47481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-072610
I1009 19:00:09.325336   47481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/functional-072610/id_rsa Username:docker}
I1009 19:00:09.417139   47481 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-072610 ssh pgrep buildkitd: exit status 1 (271.872998ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image build -t localhost/my-image:functional-072610 testdata/build --alsologtostderr
E1009 19:00:10.199680    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:10.521731    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:11.163803    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:12.445255    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 image build -t localhost/my-image:functional-072610 testdata/build --alsologtostderr: (2.86823449s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-072610 image build -t localhost/my-image:functional-072610 testdata/build --alsologtostderr:
I1009 19:00:10.105512   47662 out.go:345] Setting OutFile to fd 1 ...
I1009 19:00:10.105737   47662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:10.105751   47662 out.go:358] Setting ErrFile to fd 2...
I1009 19:00:10.105757   47662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:00:10.106045   47662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
I1009 19:00:10.106802   47662 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:10.108103   47662 config.go:182] Loaded profile config "functional-072610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1009 19:00:10.108781   47662 cli_runner.go:164] Run: docker container inspect functional-072610 --format={{.State.Status}}
I1009 19:00:10.125870   47662 ssh_runner.go:195] Run: systemctl --version
I1009 19:00:10.125926   47662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-072610
I1009 19:00:10.144009   47662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/functional-072610/id_rsa Username:docker}
I1009 19:00:10.233208   47662 build_images.go:161] Building image from path: /tmp/build.1045617920.tar
I1009 19:00:10.233287   47662 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:00:10.243660   47662 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1045617920.tar
I1009 19:00:10.247390   47662 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1045617920.tar: stat -c "%s %y" /var/lib/minikube/build/build.1045617920.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1045617920.tar': No such file or directory
I1009 19:00:10.247422   47662 ssh_runner.go:362] scp /tmp/build.1045617920.tar --> /var/lib/minikube/build/build.1045617920.tar (3072 bytes)
I1009 19:00:10.274539   47662 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1045617920
I1009 19:00:10.284151   47662 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1045617920 -xf /var/lib/minikube/build/build.1045617920.tar
I1009 19:00:10.294134   47662 containerd.go:394] Building image: /var/lib/minikube/build/build.1045617920
I1009 19:00:10.294260   47662 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1045617920 --local dockerfile=/var/lib/minikube/build/build.1045617920 --output type=image,name=localhost/my-image:functional-072610
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.5s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:020af11081f323cd9b3a4c76131aa2ca4e1c8a7b8d08e3e68f43a69eaed1644e
#8 exporting manifest sha256:020af11081f323cd9b3a4c76131aa2ca4e1c8a7b8d08e3e68f43a69eaed1644e done
#8 exporting config sha256:8e007c9939327e70b4eda77005fa17c7475eb6f5c6c016c868c91bf9c882a933 done
#8 naming to localhost/my-image:functional-072610 done
#8 DONE 0.1s
I1009 19:00:12.892427   47662 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1045617920 --local dockerfile=/var/lib/minikube/build/build.1045617920 --output type=image,name=localhost/my-image:functional-072610: (2.598126733s)
I1009 19:00:12.892500   47662 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1045617920
I1009 19:00:12.902334   47662 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1045617920.tar
I1009 19:00:12.911085   47662 build_images.go:217] Built localhost/my-image:functional-072610 from /tmp/build.1045617920.tar
I1009 19:00:12.911116   47662 build_images.go:133] succeeded building to: functional-072610
I1009 19:00:12.911122   47662 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)

TestFunctional/parallel/ImageCommands/Setup (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/10/09 19:00:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-072610
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.93s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr: (1.16005645s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr: (1.051260267s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-072610
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-072610 image load --daemon kicbase/echo-server:functional-072610 --alsologtostderr: (1.004430712s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image save kicbase/echo-server:functional-072610 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image rm kicbase/echo-server:functional-072610 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-072610
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-072610 image save --daemon kicbase/echo-server:functional-072610 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-072610
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-072610
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-072610
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-072610
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (111.05s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-088838 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1009 19:00:20.128363    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:30.370654    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:00:50.852310    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:01:31.814587    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-088838 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m50.227319329s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (111.05s)

TestMultiControlPlane/serial/DeployApp (46.64s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-088838 -- rollout status deployment/busybox: (43.77606485s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-gxx25 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-wf9gh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-xg2dl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-gxx25 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-wf9gh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-xg2dl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-gxx25 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-wf9gh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-xg2dl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (46.64s)

TestMultiControlPlane/serial/PingHostFromPods (1.63s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- get pods -o jsonpath='{.items[*].metadata.name}'
E1009 19:02:53.736204    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-gxx25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-gxx25 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-wf9gh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-wf9gh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-xg2dl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-088838 -- exec busybox-7dff88458-xg2dl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.63s)

TestMultiControlPlane/serial/AddWorkerNode (21.43s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-088838 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-088838 -v=7 --alsologtostderr: (20.446284272s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.43s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-088838 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.017836533s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

TestMultiControlPlane/serial/CopyFile (18.57s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp testdata/cp-test.txt ha-088838:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2107854145/001/cp-test_ha-088838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838:/home/docker/cp-test.txt ha-088838-m02:/home/docker/cp-test_ha-088838_ha-088838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test_ha-088838_ha-088838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838:/home/docker/cp-test.txt ha-088838-m03:/home/docker/cp-test_ha-088838_ha-088838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test_ha-088838_ha-088838-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838:/home/docker/cp-test.txt ha-088838-m04:/home/docker/cp-test_ha-088838_ha-088838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test_ha-088838_ha-088838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp testdata/cp-test.txt ha-088838-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2107854145/001/cp-test_ha-088838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m02:/home/docker/cp-test.txt ha-088838:/home/docker/cp-test_ha-088838-m02_ha-088838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test_ha-088838-m02_ha-088838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m02:/home/docker/cp-test.txt ha-088838-m03:/home/docker/cp-test_ha-088838-m02_ha-088838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test_ha-088838-m02_ha-088838-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m02:/home/docker/cp-test.txt ha-088838-m04:/home/docker/cp-test_ha-088838-m02_ha-088838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test_ha-088838-m02_ha-088838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp testdata/cp-test.txt ha-088838-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2107854145/001/cp-test_ha-088838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m03:/home/docker/cp-test.txt ha-088838:/home/docker/cp-test_ha-088838-m03_ha-088838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test_ha-088838-m03_ha-088838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m03:/home/docker/cp-test.txt ha-088838-m02:/home/docker/cp-test_ha-088838-m03_ha-088838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test_ha-088838-m03_ha-088838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m03:/home/docker/cp-test.txt ha-088838-m04:/home/docker/cp-test_ha-088838-m03_ha-088838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test_ha-088838-m03_ha-088838-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp testdata/cp-test.txt ha-088838-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2107854145/001/cp-test_ha-088838-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m04:/home/docker/cp-test.txt ha-088838:/home/docker/cp-test_ha-088838-m04_ha-088838.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838 "sudo cat /home/docker/cp-test_ha-088838-m04_ha-088838.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m04:/home/docker/cp-test.txt ha-088838-m02:/home/docker/cp-test_ha-088838-m04_ha-088838-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m02 "sudo cat /home/docker/cp-test_ha-088838-m04_ha-088838-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 cp ha-088838-m04:/home/docker/cp-test.txt ha-088838-m03:/home/docker/cp-test_ha-088838-m04_ha-088838-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 ssh -n ha-088838-m03 "sudo cat /home/docker/cp-test_ha-088838-m04_ha-088838-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.57s)

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 node stop m02 -v=7 --alsologtostderr: (12.126440583s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr: exit status 7 (739.105193ms)
-- stdout --
	ha-088838
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-088838-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-088838-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-088838-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1009 19:03:48.542203   63902 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:03:48.542367   63902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:03:48.542375   63902 out.go:358] Setting ErrFile to fd 2...
	I1009 19:03:48.542381   63902 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:03:48.542640   63902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:03:48.542820   63902 out.go:352] Setting JSON to false
	I1009 19:03:48.542857   63902 mustload.go:65] Loading cluster: ha-088838
	I1009 19:03:48.542931   63902 notify.go:220] Checking for updates...
	I1009 19:03:48.544172   63902 config.go:182] Loaded profile config "ha-088838": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:03:48.544203   63902 status.go:174] checking status of ha-088838 ...
	I1009 19:03:48.544966   63902 cli_runner.go:164] Run: docker container inspect ha-088838 --format={{.State.Status}}
	I1009 19:03:48.562248   63902 status.go:371] ha-088838 host status = "Running" (err=<nil>)
	I1009 19:03:48.562283   63902 host.go:66] Checking if "ha-088838" exists ...
	I1009 19:03:48.562589   63902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-088838
	I1009 19:03:48.599586   63902 host.go:66] Checking if "ha-088838" exists ...
	I1009 19:03:48.599894   63902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:03:48.599939   63902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-088838
	I1009 19:03:48.622514   63902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/ha-088838/id_rsa Username:docker}
	I1009 19:03:48.718262   63902 ssh_runner.go:195] Run: systemctl --version
	I1009 19:03:48.723128   63902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:03:48.736038   63902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:03:48.803818   63902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-09 19:03:48.792609962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:03:48.804710   63902 kubeconfig.go:125] found "ha-088838" server: "https://192.168.49.254:8443"
	I1009 19:03:48.804756   63902 api_server.go:166] Checking apiserver status ...
	I1009 19:03:48.804809   63902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:03:48.819204   63902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1467/cgroup
	I1009 19:03:48.828491   63902 api_server.go:182] apiserver freezer: "7:freezer:/docker/f022f2e51d65fef07159444be9c68a0790c0744939e9174a747b68429482dd2e/kubepods/burstable/poddfcfc799388a131091fae3d371fbeb7f/afbfd9b8f1aed9cd91f42d5129d63e80c8b94c5a11dcd6fcbbb6c356f54c76cb"
	I1009 19:03:48.828568   63902 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f022f2e51d65fef07159444be9c68a0790c0744939e9174a747b68429482dd2e/kubepods/burstable/poddfcfc799388a131091fae3d371fbeb7f/afbfd9b8f1aed9cd91f42d5129d63e80c8b94c5a11dcd6fcbbb6c356f54c76cb/freezer.state
	I1009 19:03:48.837676   63902 api_server.go:204] freezer state: "THAWED"
	I1009 19:03:48.837715   63902 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:03:48.845814   63902 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:03:48.845839   63902 status.go:463] ha-088838 apiserver status = Running (err=<nil>)
	I1009 19:03:48.845849   63902 status.go:176] ha-088838 status: &{Name:ha-088838 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:03:48.845867   63902 status.go:174] checking status of ha-088838-m02 ...
	I1009 19:03:48.846178   63902 cli_runner.go:164] Run: docker container inspect ha-088838-m02 --format={{.State.Status}}
	I1009 19:03:48.863487   63902 status.go:371] ha-088838-m02 host status = "Stopped" (err=<nil>)
	I1009 19:03:48.863508   63902 status.go:384] host is not running, skipping remaining checks
	I1009 19:03:48.863515   63902 status.go:176] ha-088838-m02 status: &{Name:ha-088838-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:03:48.863535   63902 status.go:174] checking status of ha-088838-m03 ...
	I1009 19:03:48.863839   63902 cli_runner.go:164] Run: docker container inspect ha-088838-m03 --format={{.State.Status}}
	I1009 19:03:48.881585   63902 status.go:371] ha-088838-m03 host status = "Running" (err=<nil>)
	I1009 19:03:48.881610   63902 host.go:66] Checking if "ha-088838-m03" exists ...
	I1009 19:03:48.881929   63902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-088838-m03
	I1009 19:03:48.899358   63902 host.go:66] Checking if "ha-088838-m03" exists ...
	I1009 19:03:48.899737   63902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:03:48.899786   63902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-088838-m03
	I1009 19:03:48.920849   63902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/ha-088838-m03/id_rsa Username:docker}
	I1009 19:03:49.011598   63902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:03:49.025368   63902 kubeconfig.go:125] found "ha-088838" server: "https://192.168.49.254:8443"
	I1009 19:03:49.025400   63902 api_server.go:166] Checking apiserver status ...
	I1009 19:03:49.025447   63902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:03:49.037165   63902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	I1009 19:03:49.047131   63902 api_server.go:182] apiserver freezer: "7:freezer:/docker/421eba3b93903eee512d883d45a082b90a60b09cfeb0047bd5d4887f3bfb5ad4/kubepods/burstable/pode4b59bbc9c8a73553596975bc0f4e60c/d2552ab01a53a005ba0bc00d2d7d3936bef057cfab04e383e4ce87a83394927a"
	I1009 19:03:49.047252   63902 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/421eba3b93903eee512d883d45a082b90a60b09cfeb0047bd5d4887f3bfb5ad4/kubepods/burstable/pode4b59bbc9c8a73553596975bc0f4e60c/d2552ab01a53a005ba0bc00d2d7d3936bef057cfab04e383e4ce87a83394927a/freezer.state
	I1009 19:03:49.056079   63902 api_server.go:204] freezer state: "THAWED"
	I1009 19:03:49.056108   63902 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:03:49.063917   63902 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:03:49.063945   63902 status.go:463] ha-088838-m03 apiserver status = Running (err=<nil>)
	I1009 19:03:49.063954   63902 status.go:176] ha-088838-m03 status: &{Name:ha-088838-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:03:49.063972   63902 status.go:174] checking status of ha-088838-m04 ...
	I1009 19:03:49.064294   63902 cli_runner.go:164] Run: docker container inspect ha-088838-m04 --format={{.State.Status}}
	I1009 19:03:49.081492   63902 status.go:371] ha-088838-m04 host status = "Running" (err=<nil>)
	I1009 19:03:49.081516   63902 host.go:66] Checking if "ha-088838-m04" exists ...
	I1009 19:03:49.081823   63902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-088838-m04
	I1009 19:03:49.102083   63902 host.go:66] Checking if "ha-088838-m04" exists ...
	I1009 19:03:49.102551   63902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:03:49.104237   63902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-088838-m04
	I1009 19:03:49.123885   63902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/ha-088838-m04/id_rsa Username:docker}
	I1009 19:03:49.213974   63902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:03:49.226600   63902 status.go:176] ha-088838-m04 status: &{Name:ha-088838-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
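The stderr trace above also records how "minikube status" verifies a control-plane node: the container state comes from docker container inspect, the kubelet from systemctl is-active, and the apiserver from its freezer cgroup state (THAWED) followed by an HTTPS probe of /healthz on the cluster endpoint taken from the kubeconfig (https://192.168.49.254:8443 in this run). The following is a minimal, illustrative Go sketch of that final probe only; it is written for this report, not taken from minikube's sources, and the endpoint is simply the one shown in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed certificate, so the probe skips TLS verification,
	// mirroring the "Checking apiserver healthz" step in the trace above.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 corresponds to "apiserver status = Running"
}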

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.54s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 node start m02 -v=7 --alsologtostderr: (17.412625708s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr: (1.033064814s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.331856028s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.79s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-088838 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-088838 -v=7 --alsologtostderr
E1009 19:04:21.512674    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.519098    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.530427    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.551786    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.593160    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.674555    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:21.836001    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:22.157409    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:22.799388    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:24.081591    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:26.643923    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:31.765849    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:42.007790    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-088838 -v=7 --alsologtostderr: (37.399517103s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-088838 --wait=true -v=7 --alsologtostderr
E1009 19:05:02.489176    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:09.875275    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:37.578246    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:05:43.451347    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-088838 --wait=true -v=7 --alsologtostderr: (2m1.197982263s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-088838
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (158.79s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 node delete m03 -v=7 --alsologtostderr: (9.575811346s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)
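ha_test.go:521 above checks node readiness with a kubectl go-template that prints the status of each node's Ready condition. As a rough illustration of what that template yields, here is a small Go sketch (a hypothetical helper, not part of the test suite) that runs the same template against the current kubectl context and counts the nodes reporting True.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test: one " True"/" False" line per node, taken from its Ready condition.
	tmpl := `go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ready := 0
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) == "True" {
			ready++
		}
	}
	fmt.Println("nodes reporting Ready=True:", ready)
}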

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 stop -v=7 --alsologtostderr
E1009 19:07:05.374277    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 stop -v=7 --alsologtostderr: (35.952035161s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr: exit status 7 (116.913085ms)

                                                
                                                
-- stdout --
	ha-088838
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-088838-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-088838-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:07:35.817389   78280 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:07:35.817575   78280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:35.817604   78280 out.go:358] Setting ErrFile to fd 2...
	I1009 19:07:35.817626   78280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:07:35.817895   78280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:07:35.818113   78280 out.go:352] Setting JSON to false
	I1009 19:07:35.818177   78280 mustload.go:65] Loading cluster: ha-088838
	I1009 19:07:35.818265   78280 notify.go:220] Checking for updates...
	I1009 19:07:35.818687   78280 config.go:182] Loaded profile config "ha-088838": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:07:35.818751   78280 status.go:174] checking status of ha-088838 ...
	I1009 19:07:35.819555   78280 cli_runner.go:164] Run: docker container inspect ha-088838 --format={{.State.Status}}
	I1009 19:07:35.837199   78280 status.go:371] ha-088838 host status = "Stopped" (err=<nil>)
	I1009 19:07:35.837220   78280 status.go:384] host is not running, skipping remaining checks
	I1009 19:07:35.837227   78280 status.go:176] ha-088838 status: &{Name:ha-088838 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:07:35.837250   78280 status.go:174] checking status of ha-088838-m02 ...
	I1009 19:07:35.837585   78280 cli_runner.go:164] Run: docker container inspect ha-088838-m02 --format={{.State.Status}}
	I1009 19:07:35.854721   78280 status.go:371] ha-088838-m02 host status = "Stopped" (err=<nil>)
	I1009 19:07:35.854741   78280 status.go:384] host is not running, skipping remaining checks
	I1009 19:07:35.854748   78280 status.go:176] ha-088838-m02 status: &{Name:ha-088838-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:07:35.854768   78280 status.go:174] checking status of ha-088838-m04 ...
	I1009 19:07:35.855073   78280 cli_runner.go:164] Run: docker container inspect ha-088838-m04 --format={{.State.Status}}
	I1009 19:07:35.881449   78280 status.go:371] ha-088838-m04 host status = "Stopped" (err=<nil>)
	I1009 19:07:35.881472   78280 status.go:384] host is not running, skipping remaining checks
	I1009 19:07:35.881480   78280 status.go:176] ha-088838-m04 status: &{Name:ha-088838-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)
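Note that the status check above is not expected to exit cleanly once every node is stopped: the command returns exit status 7 along with the "Stopped" report, and the test treats that as the expected outcome here. A tiny Go sketch (illustrative only, not the test's code) showing how such a non-zero exit can be captured when driving the same binary from code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-088838", "status").Output()
	fmt.Print(string(out)) // the per-node Stopped/Running report still arrives on stdout
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this was 7 while every node of the profile was stopped.
		fmt.Println("status exited with code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}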

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (40.21s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-088838 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-088838 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (39.211661811s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (40.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.22s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-088838 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-088838 --control-plane -v=7 --alsologtostderr: (42.182025664s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-088838 status -v=7 --alsologtostderr: (1.039135971s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.22s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.020467487s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                    
TestJSONOutput/start/Command (56.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-615128 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1009 19:09:21.512785    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:09:49.216825    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-615128 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (56.232333959s)
--- PASS: TestJSONOutput/start/Command (56.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-615128 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-615128 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-615128 --output=json --user=testUser
E1009 19:10:09.875066    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-615128 --output=json --user=testUser: (5.759690564s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-273237 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-273237 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.070152ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"add9b6b7-f295-4249-9d04-4e7efbf0b220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-273237] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e42eaacb-a398-44f8-9e30-3315da654a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"cf6f796a-560f-4a6b-b9ef-edb2802a88e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0bc811a4-84bd-4a1a-8c0d-579232b4747a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig"}}
	{"specversion":"1.0","id":"94bcedc6-27cc-45c3-84a6-e8a64198e471","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube"}}
	{"specversion":"1.0","id":"88cd3c89-111c-43c9-964c-344879ebc261","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"146926bc-0706-42d9-904d-62d76f1b8286","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bf48b286-9719-4e9d-9281-9bc4f10c367d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-273237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-273237
--- PASS: TestErrorJSONOutput (0.24s)
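The -- stdout -- block above is minikube's --output=json progress reporting: one CloudEvents-style JSON object per line, ending here with an io.k8s.sigs.minikube.error event carrying exitcode 56 (DRV_UNSUPPORTED_OS). A short, hypothetical consumer in Go, using only the fields visible in that output (type and the data map of strings):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event keeps only the fields visible in the JSON lines above; everything else is ignored.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. out/minikube-linux-arm64 start -p some-profile --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // non-JSON lines are skipped
		}
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("error event, exitcode:", e.Data["exitcode"])
		}
	}
}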

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-118100 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-118100 --network=: (37.408641802s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-118100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-118100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-118100: (2.072085769s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.50s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.82s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-816711 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-816711 --network=bridge: (29.810563797s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-816711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-816711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-816711: (1.979118011s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.82s)

                                                
                                    
TestKicExistingNetwork (30.43s)

=== RUN   TestKicExistingNetwork
I1009 19:11:28.562647    7596 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:11:28.576782    7596 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:11:28.576850    7596 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:11:28.576866    7596 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:11:28.597693    7596 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:11:28.597722    7596 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1009 19:11:28.597826    7596 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1009 19:11:28.597938    7596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:11:28.614244    7596 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bda550f8dcd8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:31:98:1f:4a} reservation:<nil>}
I1009 19:11:28.614546    7596 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019673b0}
I1009 19:11:28.614571    7596 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 19:11:28.614629    7596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:11:28.685823    7596 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-843974 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-843974 --network=existing-network: (28.364651694s)
helpers_test.go:175: Cleaning up "existing-network-843974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-843974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-843974: (1.912444579s)
I1009 19:11:58.979068    7596 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.43s)
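TestKicExistingNetwork creates the docker network itself before pointing minikube at it with --network=existing-network. The trace shows the subnet selection (192.168.49.0/24 is already held by an existing bridge, so the next free private /24, 192.168.58.0/24, is chosen) followed by a plain docker network create. A sketch of that pre-creation step in Go, shelling out to the same command recorded in the log (name, subnet and labels are the ones from this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the "docker network create" line in the trace above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network existing-network: %s", out)
}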

                                                
                                    
TestKicCustomSubnet (33.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-849213 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-849213 --subnet=192.168.60.0/24: (31.024924409s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-849213 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-849213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-849213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-849213: (2.123607824s)
--- PASS: TestKicCustomSubnet (33.17s)
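The custom-subnet assertion is effectively a one-liner: inspect the network docker created for the profile and compare its first IPAM subnet with the value given to --subnet. A minimal Go sketch of that verification, reusing the inspect format string from kic_custom_network_test.go:161 above (profile name and subnet are the ones from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24" // value passed to --subnet in the run above
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-849213",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("docker network inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("subnet %s matches %s: %v\n", got, want, got == want)
}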

                                                
                                    
TestKicStaticIP (34.87s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-657326 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-657326 --static-ip=192.168.200.200: (32.669234917s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-657326 ip
helpers_test.go:175: Cleaning up "static-ip-657326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-657326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-657326: (2.054035463s)
--- PASS: TestKicStaticIP (34.87s)

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-032334 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-032334 --driver=docker  --container-runtime=containerd: (30.410277397s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-035132 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-035132 --driver=docker  --container-runtime=containerd: (31.913793968s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-032334
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-035132
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-035132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-035132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-035132: (1.942171672s)
helpers_test.go:175: Cleaning up "first-032334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-032334
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-032334: (1.977303991s)
--- PASS: TestMinikubeProfile (67.54s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.48s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-048185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-048185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.476021051s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-048185 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.25s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-049982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1009 19:14:21.511624    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-049982 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.249184358s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.25s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-049982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-048185 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-048185 --alsologtostderr -v=5: (1.613088051s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-049982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-049982
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-049982: (1.200766209s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.51s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-049982
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-049982: (6.507546967s)
--- PASS: TestMountStart/serial/RestartStopped (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-049982 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (80.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-527113 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1009 19:15:09.875366    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-527113 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.079638008s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.73s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (19.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-527113 -- rollout status deployment/busybox: (17.933922324s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-788v7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-kls9k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-788v7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-kls9k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-788v7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-kls9k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.83s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-788v7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-788v7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-kls9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-527113 -- exec busybox-7dff88458-kls9k -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                    
TestMultiNode/serial/AddNode (16.1s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-527113 -v 3 --alsologtostderr
E1009 19:16:32.940356    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-527113 -v 3 --alsologtostderr: (15.391081607s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.10s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-527113 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.92s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp testdata/cp-test.txt multinode-527113:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile423377909/001/cp-test_multinode-527113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113:/home/docker/cp-test.txt multinode-527113-m02:/home/docker/cp-test_multinode-527113_multinode-527113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test_multinode-527113_multinode-527113-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113:/home/docker/cp-test.txt multinode-527113-m03:/home/docker/cp-test_multinode-527113_multinode-527113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test_multinode-527113_multinode-527113-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp testdata/cp-test.txt multinode-527113-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile423377909/001/cp-test_multinode-527113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m02:/home/docker/cp-test.txt multinode-527113:/home/docker/cp-test_multinode-527113-m02_multinode-527113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test_multinode-527113-m02_multinode-527113.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m02:/home/docker/cp-test.txt multinode-527113-m03:/home/docker/cp-test_multinode-527113-m02_multinode-527113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test_multinode-527113-m02_multinode-527113-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp testdata/cp-test.txt multinode-527113-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile423377909/001/cp-test_multinode-527113-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m03:/home/docker/cp-test.txt multinode-527113:/home/docker/cp-test_multinode-527113-m03_multinode-527113.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113 "sudo cat /home/docker/cp-test_multinode-527113-m03_multinode-527113.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 cp multinode-527113-m03:/home/docker/cp-test.txt multinode-527113-m02:/home/docker/cp-test_multinode-527113-m03_multinode-527113-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 ssh -n multinode-527113-m02 "sudo cat /home/docker/cp-test_multinode-527113-m03_multinode-527113-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.92s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-527113 node stop m03: (1.220624848s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-527113 status: exit status 7 (509.792402ms)

                                                
                                                
-- stdout --
	multinode-527113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-527113-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-527113-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr: exit status 7 (509.920441ms)

                                                
                                                
-- stdout --
	multinode-527113
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-527113-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-527113-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:16:53.623734  131737 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:16:53.623852  131737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:16:53.623863  131737 out.go:358] Setting ErrFile to fd 2...
	I1009 19:16:53.623868  131737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:16:53.624110  131737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:16:53.624279  131737 out.go:352] Setting JSON to false
	I1009 19:16:53.624317  131737 mustload.go:65] Loading cluster: multinode-527113
	I1009 19:16:53.624406  131737 notify.go:220] Checking for updates...
	I1009 19:16:53.624734  131737 config.go:182] Loaded profile config "multinode-527113": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:16:53.624750  131737 status.go:174] checking status of multinode-527113 ...
	I1009 19:16:53.625266  131737 cli_runner.go:164] Run: docker container inspect multinode-527113 --format={{.State.Status}}
	I1009 19:16:53.644963  131737 status.go:371] multinode-527113 host status = "Running" (err=<nil>)
	I1009 19:16:53.644993  131737 host.go:66] Checking if "multinode-527113" exists ...
	I1009 19:16:53.645309  131737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-527113
	I1009 19:16:53.671267  131737 host.go:66] Checking if "multinode-527113" exists ...
	I1009 19:16:53.671652  131737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:16:53.671704  131737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-527113
	I1009 19:16:53.692439  131737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/multinode-527113/id_rsa Username:docker}
	I1009 19:16:53.785576  131737 ssh_runner.go:195] Run: systemctl --version
	I1009 19:16:53.789642  131737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:16:53.801017  131737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:16:53.848965  131737 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-09 19:16:53.83910823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:16:53.849569  131737 kubeconfig.go:125] found "multinode-527113" server: "https://192.168.67.2:8443"
	I1009 19:16:53.849604  131737 api_server.go:166] Checking apiserver status ...
	I1009 19:16:53.849659  131737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:16:53.860984  131737 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	I1009 19:16:53.870070  131737 api_server.go:182] apiserver freezer: "7:freezer:/docker/d4c129c7ffe6c35fbf83d6d5d3bfc519b9df2383ea59b352856b5177f4057be0/kubepods/burstable/pod0758c26fa81ccc4df32774cf0e402295/64d8c19da45cc6facb7c3240c37cd8b7bacb5d2d697fb5e74141425f4b8fc48f"
	I1009 19:16:53.870183  131737 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d4c129c7ffe6c35fbf83d6d5d3bfc519b9df2383ea59b352856b5177f4057be0/kubepods/burstable/pod0758c26fa81ccc4df32774cf0e402295/64d8c19da45cc6facb7c3240c37cd8b7bacb5d2d697fb5e74141425f4b8fc48f/freezer.state
	I1009 19:16:53.878817  131737 api_server.go:204] freezer state: "THAWED"
	I1009 19:16:53.878855  131737 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 19:16:53.886331  131737 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 19:16:53.886360  131737 status.go:463] multinode-527113 apiserver status = Running (err=<nil>)
	I1009 19:16:53.886371  131737 status.go:176] multinode-527113 status: &{Name:multinode-527113 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:16:53.886387  131737 status.go:174] checking status of multinode-527113-m02 ...
	I1009 19:16:53.886718  131737 cli_runner.go:164] Run: docker container inspect multinode-527113-m02 --format={{.State.Status}}
	I1009 19:16:53.903117  131737 status.go:371] multinode-527113-m02 host status = "Running" (err=<nil>)
	I1009 19:16:53.903139  131737 host.go:66] Checking if "multinode-527113-m02" exists ...
	I1009 19:16:53.903457  131737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-527113-m02
	I1009 19:16:53.920160  131737 host.go:66] Checking if "multinode-527113-m02" exists ...
	I1009 19:16:53.920453  131737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:16:53.920506  131737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-527113-m02
	I1009 19:16:53.942661  131737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19780-2290/.minikube/machines/multinode-527113-m02/id_rsa Username:docker}
	I1009 19:16:54.037984  131737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:16:54.058156  131737 status.go:176] multinode-527113-m02 status: &{Name:multinode-527113-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:16:54.058190  131737 status.go:174] checking status of multinode-527113-m03 ...
	I1009 19:16:54.058503  131737 cli_runner.go:164] Run: docker container inspect multinode-527113-m03 --format={{.State.Status}}
	I1009 19:16:54.080283  131737 status.go:371] multinode-527113-m03 host status = "Stopped" (err=<nil>)
	I1009 19:16:54.080306  131737 status.go:384] host is not running, skipping remaining checks
	I1009 19:16:54.080313  131737 status.go:176] multinode-527113-m03 status: &{Name:multinode-527113-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
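
A compact sketch of the single-node stop check above (same profile; exit code 7 signals that at least one host is Stopped):

  minikube -p multinode-527113 node stop m03
  minikube -p multinode-527113 status                     # exits 7: m03 reports host/kubelet Stopped
  minikube -p multinode-527113 status --alsologtostderr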

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-527113 node start m03 -v=7 --alsologtostderr: (8.717167833s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.46s)
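
And the matching restart check, as a short sketch:

  minikube -p multinode-527113 node start m03
  minikube -p multinode-527113 status
  kubectl get nodes    # all three nodes should be listed again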

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-527113
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-527113
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-527113: (24.996783541s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-527113 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-527113 --wait=true -v=8 --alsologtostderr: (55.21731749s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-527113
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.35s)
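
The same restart-keeps-nodes flow by hand, as a hedged sketch with the flags used above:

  minikube node list -p multinode-527113           # record the node list
  minikube stop -p multinode-527113                # stop the whole profile
  minikube start -p multinode-527113 --wait=true   # restart; every node should come back
  minikube node list -p multinode-527113           # compare against the first listing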

                                                
                                    
TestMultiNode/serial/DeleteNode (5.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-527113 node delete m03: (4.539157578s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)
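
A minimal sketch of the delete check; the go-template call prints only each remaining node's Ready condition (template lightly reformatted from the log):

  minikube -p multinode-527113 node delete m03
  minikube -p multinode-527113 status --alsologtostderr
  kubectl get nodes
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'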

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-527113 stop: (23.882213843s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-527113 status: exit status 7 (86.643347ms)

                                                
                                                
-- stdout --
	multinode-527113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-527113-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr: exit status 7 (93.284002ms)

                                                
                                                
-- stdout --
	multinode-527113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-527113-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:18:53.102037  139724 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:18:53.102225  139724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:53.102251  139724 out.go:358] Setting ErrFile to fd 2...
	I1009 19:18:53.102269  139724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:53.102538  139724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:18:53.102758  139724 out.go:352] Setting JSON to false
	I1009 19:18:53.102822  139724 mustload.go:65] Loading cluster: multinode-527113
	I1009 19:18:53.102915  139724 notify.go:220] Checking for updates...
	I1009 19:18:53.103311  139724 config.go:182] Loaded profile config "multinode-527113": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:18:53.103356  139724 status.go:174] checking status of multinode-527113 ...
	I1009 19:18:53.103940  139724 cli_runner.go:164] Run: docker container inspect multinode-527113 --format={{.State.Status}}
	I1009 19:18:53.122137  139724 status.go:371] multinode-527113 host status = "Stopped" (err=<nil>)
	I1009 19:18:53.122159  139724 status.go:384] host is not running, skipping remaining checks
	I1009 19:18:53.122167  139724 status.go:176] multinode-527113 status: &{Name:multinode-527113 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:18:53.122199  139724 status.go:174] checking status of multinode-527113-m02 ...
	I1009 19:18:53.122535  139724 cli_runner.go:164] Run: docker container inspect multinode-527113-m02 --format={{.State.Status}}
	I1009 19:18:53.140836  139724 status.go:371] multinode-527113-m02 host status = "Stopped" (err=<nil>)
	I1009 19:18:53.140862  139724 status.go:384] host is not running, skipping remaining checks
	I1009 19:18:53.140869  139724 status.go:176] multinode-527113-m02 status: &{Name:multinode-527113-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-527113 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1009 19:19:21.511832    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-527113 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.774659713s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-527113 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-527113
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-527113-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-527113-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.48941ms)

                                                
                                                
-- stdout --
	* [multinode-527113-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-527113-m02' is duplicated with machine name 'multinode-527113-m02' in profile 'multinode-527113'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-527113-m03 --driver=docker  --container-runtime=containerd
E1009 19:20:09.875183    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-527113-m03 --driver=docker  --container-runtime=containerd: (29.716686888s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-527113
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-527113: exit status 80 (304.838526ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-527113 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-527113-m03 already exists in multinode-527113-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-527113-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-527113-m03: (1.943522759s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.11s)
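
Both failure modes above can be reproduced with a short sketch (exit codes as observed in this run):

  minikube node list -p multinode-527113
  # exit 14 (MK_USAGE): the profile name collides with a machine name inside multinode-527113
  minikube start -p multinode-527113-m02 --driver=docker --container-runtime=containerd
  # a non-colliding name starts normally
  minikube start -p multinode-527113-m03 --driver=docker --container-runtime=containerd
  # exit 80 (GUEST_NODE_ADD): node add refuses because multinode-527113-m03 now exists as its own profile
  minikube node add -p multinode-527113
  minikube delete -p multinode-527113-m03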

                                                
                                    
TestPreload (127.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-250483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1009 19:20:44.580663    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-250483 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.964483244s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-250483 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-250483 image pull gcr.io/k8s-minikube/busybox: (1.955481068s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-250483
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-250483: (12.0436719s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-250483 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-250483 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.551050709s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-250483 image list
helpers_test.go:175: Cleaning up "test-preload-250483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-250483
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-250483: (2.421436792s)
--- PASS: TestPreload (127.23s)
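
A hedged sketch of the preload check: start without a preload tarball on an older Kubernetes, pull an extra image, then restart and confirm the image is still present (profile name shortened to a placeholder):

  minikube start -p test-preload --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
  minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
  minikube stop -p test-preload
  minikube start -p test-preload --memory=2200 --wait=true --driver=docker --container-runtime=containerd
  minikube -p test-preload image list     # busybox should still be listed after the restart
  minikube delete -p test-preload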

                                                
                                    
TestScheduledStopUnix (109.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-083588 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-083588 --memory=2048 --driver=docker  --container-runtime=containerd: (32.533224803s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-083588 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-083588 -n scheduled-stop-083588
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-083588 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:22:57.918289    7596 retry.go:31] will retry after 141.209µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.919517    7596 retry.go:31] will retry after 189.508µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.920677    7596 retry.go:31] will retry after 233.583µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.921768    7596 retry.go:31] will retry after 269.207µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.922890    7596 retry.go:31] will retry after 713.703µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.923977    7596 retry.go:31] will retry after 907.273µs: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.925075    7596 retry.go:31] will retry after 1.458047ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.927263    7596 retry.go:31] will retry after 1.348022ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.929419    7596 retry.go:31] will retry after 1.894588ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.931625    7596 retry.go:31] will retry after 2.112131ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.934827    7596 retry.go:31] will retry after 7.768945ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.943047    7596 retry.go:31] will retry after 5.476663ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.949283    7596 retry.go:31] will retry after 11.289601ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.961500    7596 retry.go:31] will retry after 23.912083ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:57.985726    7596 retry.go:31] will retry after 24.314838ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
I1009 19:22:58.010945    7596 retry.go:31] will retry after 45.584935ms: open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/scheduled-stop-083588/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-083588 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-083588 -n scheduled-stop-083588
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-083588
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-083588 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-083588
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-083588: exit status 7 (76.088595ms)

                                                
                                                
-- stdout --
	scheduled-stop-083588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-083588 -n scheduled-stop-083588
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-083588 -n scheduled-stop-083588: exit status 7 (67.012035ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-083588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-083588
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-083588: (5.06848079s)
--- PASS: TestScheduledStopUnix (109.16s)
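
The scheduled-stop behaviour can be reproduced with a few commands; a sketch with arbitrary delays (profile name shortened):

  minikube start -p scheduled-stop --memory=2048 --driver=docker --container-runtime=containerd
  minikube stop -p scheduled-stop --schedule 5m                  # arm a stop five minutes out
  minikube status --format={{.TimeToStop}} -p scheduled-stop
  minikube stop -p scheduled-stop --cancel-scheduled             # cancel; the host stays Running
  minikube stop -p scheduled-stop --schedule 15s                 # re-arm with a short delay
  sleep 30; minikube status -p scheduled-stop                    # exits 7 once the host has stopped
  minikube delete -p scheduled-stop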

                                                
                                    
TestInsufficientStorage (10.33s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-845381 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1009 19:24:21.512804    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-845381 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.915329346s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0816f8a0-7d10-4140-baf9-2adff515e8f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-845381] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"23a3bf3a-31d7-4103-ae83-e8b546c95e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"5a53a226-a919-4f0f-99ac-7de2e6681514","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"815dda50-379d-46a6-b9fb-de6878b98e50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig"}}
	{"specversion":"1.0","id":"fc2ea3c7-75d7-4cb2-bd97-0c5d1c2f3cd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube"}}
	{"specversion":"1.0","id":"b3b1cbf4-d4e9-4b13-815e-cca2a09b01dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e50c7c0b-d20d-42c3-a386-eebd03744923","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1e55a3a7-42a1-414d-bcce-7bb73d8c66d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c156c05c-6b9a-4146-8581-6f94bea5357f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"598bfa88-ca4f-4212-a666-0a8bb52991b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce1be851-d4ee-452c-a126-1ae3e72c4d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1275392c-d519-48f5-adc4-f3ad4da8b3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-845381\" primary control-plane node in \"insufficient-storage-845381\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"212bd166-4c83-48ef-8b00-b8703361f64b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b56a080-56fd-4d56-ada8-ef0b1304d347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f323f549-e3d0-4a92-9a9c-84dba759c8a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-845381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-845381 --output=json --layout=cluster: exit status 7 (281.368448ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-845381","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-845381","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:24:22.224246  158718 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-845381" does not appear in /home/jenkins/minikube-integration/19780-2290/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-845381 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-845381 --output=json --layout=cluster: exit status 7 (270.006386ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-845381","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-845381","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:24:22.492503  158777 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-845381" does not appear in /home/jenkins/minikube-integration/19780-2290/kubeconfig
	E1009 19:24:22.502602  158777 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/insufficient-storage-845381/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-845381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-845381
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-845381: (1.858325718s)
--- PASS: TestInsufficientStorage (10.33s)
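
A sketch of the storage check: the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON output suggest the run simulates a nearly full /var, so start is expected to abort with exit code 26 (RSRC_DOCKER_STORAGE); treat the variable names as an assumption read off that output:

  MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p insufficient-storage --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
  minikube status -p insufficient-storage --output=json --layout=cluster   # StatusName "InsufficientStorage", exit 7
  minikube delete -p insufficient-storage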

                                                
                                    
TestRunningBinaryUpgrade (84.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.747068028 start -p running-upgrade-534142 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.747068028 start -p running-upgrade-534142 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (50.269521538s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-534142 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-534142 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.163358378s)
helpers_test.go:175: Cleaning up "running-upgrade-534142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-534142
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-534142: (2.758097993s)
--- PASS: TestRunningBinaryUpgrade (84.35s)
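
A minimal sketch of the running-binary upgrade: create a cluster with an older release binary, then run start again with the binary under test while the cluster is still running (the /tmp/minikube-v1.26.0.* path is the legacy binary the test downloads; profile name shortened):

  /tmp/minikube-v1.26.0.747068028 start -p running-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
  minikube start -p running-upgrade --memory=2200 --driver=docker --container-runtime=containerd   # upgrade in place, no stop
  minikube delete -p running-upgrade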

                                                
                                    
TestKubernetesUpgrade (105.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.701653511s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-459847
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-459847: (1.360629155s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-459847 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-459847 status --format={{.Host}}: exit status 7 (83.930441ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.044218468s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-459847 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (128.339186ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-459847] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-459847
	    minikube start -p kubernetes-upgrade-459847 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4598472 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-459847 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-459847 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.243449548s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-459847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-459847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-459847: (2.614714857s)
--- PASS: TestKubernetesUpgrade (105.34s)
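
The upgrade/downgrade rules exercised above, sketched with the same versions (profile name shortened):

  minikube start -p kubernetes-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  minikube stop -p kubernetes-upgrade
  minikube start -p kubernetes-upgrade --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd   # upgrading is allowed
  # exit 106 (K8S_DOWNGRADE_UNSUPPORTED): an existing cluster cannot be downgraded in place
  minikube start -p kubernetes-upgrade --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  minikube start -p kubernetes-upgrade --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd   # restarting at the current version still works
  minikube delete -p kubernetes-upgrade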

                                                
                                    
TestMissingContainerUpgrade (176.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.739297554 start -p missing-upgrade-715630 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.739297554 start -p missing-upgrade-715630 --memory=2200 --driver=docker  --container-runtime=containerd: (1m33.175485454s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-715630
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-715630: (10.266363893s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-715630
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-715630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-715630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.560116271s)
helpers_test.go:175: Cleaning up "missing-upgrade-715630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-715630
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-715630: (2.572427308s)
--- PASS: TestMissingContainerUpgrade (176.59s)
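
A sketch of the missing-container recovery path: create a cluster with an older release, remove its Docker container out from under it, then let the current binary recreate it (profile name shortened; the container name matches the profile name):

  /tmp/minikube-v1.26.0.739297554 start -p missing-upgrade --memory=2200 --driver=docker --container-runtime=containerd
  docker stop missing-upgrade && docker rm missing-upgrade
  minikube start -p missing-upgrade --memory=2200 --driver=docker --container-runtime=containerd
  minikube delete -p missing-upgrade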

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (80.333081ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-090310] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
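
As the stderr above notes, --no-kubernetes and --kubernetes-version are mutually exclusive; a sketch of the failing invocation and the suggested remedy (profile name shortened):

  # exit 14 (MK_USAGE): the two flags cannot be combined
  minikube start -p nokube --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
  # if kubernetes-version was set as a global config value, unset it first
  minikube config unset kubernetes-version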

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-090310 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-090310 --driver=docker  --container-runtime=containerd: (38.021417657s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-090310 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --driver=docker  --container-runtime=containerd
E1009 19:25:09.874885    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.444643736s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-090310 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-090310 status -o json: exit status 2 (326.186874ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-090310","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-090310
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-090310: (2.000164721s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.77s)

                                                
                                    
TestNoKubernetes/serial/Start (5.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-090310 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.043782356s)
--- PASS: TestNoKubernetes/serial/Start (5.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-090310 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-090310 "sudo systemctl is-active --quiet service kubelet": exit status 1 (248.16642ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
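
A short sketch of the no-Kubernetes flow these subtests cover: start the profile without Kubernetes, then confirm from inside the node that the kubelet unit is not active (exit 1 from the ssh command is the expected outcome; profile name shortened):

  minikube start -p nokube --no-kubernetes --driver=docker --container-runtime=containerd
  minikube -p nokube status -o json       # reports "Kubelet":"Stopped","APIServer":"Stopped"
  minikube ssh -p nokube "sudo systemctl is-active --quiet service kubelet"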

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.95s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-090310
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-090310: (1.212400274s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-090310 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-090310 --driver=docker  --container-runtime=containerd: (7.363583993s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-090310 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-090310 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.333549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (125.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3607906383 start -p stopped-upgrade-149766 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3607906383 start -p stopped-upgrade-149766 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.801185679s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3607906383 -p stopped-upgrade-149766 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3607906383 -p stopped-upgrade-149766 stop: (20.34708559s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-149766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-149766 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.743757548s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.89s)
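
The upgrade path exercised here is: provision a cluster with an older minikube release, stop it, then start the same profile with the binary under test and expect it to come up cleanly. A rough manual equivalent, with the release asset URL and the scratch profile name being assumptions rather than values taken from this run:

    # 1. Fetch an old release binary (asset name assumed; pick the one for your platform).
    curl -fsSL -o /tmp/minikube-v1.26.0 \
      https://github.com/kubernetes/minikube/releases/download/v1.26.0/minikube-linux-arm64
    chmod +x /tmp/minikube-v1.26.0

    # 2. Create and then stop a cluster with the old binary.
    /tmp/minikube-v1.26.0 start -p stopped-upgrade --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0 -p stopped-upgrade stop

    # 3. Start the same, still-stopped profile with the new binary.
    out/minikube-linux-arm64 start -p stopped-upgrade --memory=2200 --driver=docker --container-runtime=containerd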

                                                
                                    
TestPause/serial/Start (78.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-865658 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1009 19:29:21.512805    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-865658 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m18.294260642s)
--- PASS: TestPause/serial/Start (78.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-149766
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-149766: (1.422524443s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-865658 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-865658 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.685364057s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.70s)

                                                
                                    
TestNetworkPlugins/group/false (5.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-138657 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-138657 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (200.396232ms)

                                                
                                                
-- stdout --
	* [false-138657] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:30:11.818727  194246 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:30:11.818848  194246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:30:11.818883  194246 out.go:358] Setting ErrFile to fd 2...
	I1009 19:30:11.818895  194246 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:30:11.819172  194246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-2290/.minikube/bin
	I1009 19:30:11.819584  194246 out.go:352] Setting JSON to false
	I1009 19:30:11.820520  194246 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4355,"bootTime":1728497857,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1009 19:30:11.820590  194246 start.go:139] virtualization:  
	I1009 19:30:11.823962  194246 out.go:177] * [false-138657] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1009 19:30:11.827383  194246 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:30:11.827463  194246 notify.go:220] Checking for updates...
	I1009 19:30:11.832502  194246 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:30:11.835036  194246 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-2290/kubeconfig
	I1009 19:30:11.837907  194246 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-2290/.minikube
	I1009 19:30:11.840622  194246 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 19:30:11.843486  194246 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:30:11.846886  194246 config.go:182] Loaded profile config "pause-865658": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1009 19:30:11.847029  194246 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:30:11.871468  194246 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:30:11.871611  194246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:30:11.949331  194246 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-09 19:30:11.938979526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1009 19:30:11.949453  194246 docker.go:318] overlay module found
	I1009 19:30:11.952104  194246 out.go:177] * Using the docker driver based on user configuration
	I1009 19:30:11.954553  194246 start.go:297] selected driver: docker
	I1009 19:30:11.954577  194246 start.go:901] validating driver "docker" against <nil>
	I1009 19:30:11.954592  194246 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:30:11.957610  194246 out.go:201] 
	W1009 19:30:11.960058  194246 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1009 19:30:11.962650  194246 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-138657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-138657" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:30:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-865658
contexts:
- context:
    cluster: pause-865658
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:30:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-865658
  name: pause-865658
current-context: pause-865658
kind: Config
preferences: {}
users:
- name: pause-865658
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/pause-865658/client.crt
    client-key: /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/pause-865658/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-138657

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-138657"

                                                
                                                
----------------------- debugLogs end: false-138657 [took: 4.898014963s] --------------------------------
helpers_test.go:175: Cleaning up "false-138657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-138657
--- PASS: TestNetworkPlugins/group/false (5.27s)
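
The substantive part of this test is the fast-fail near the top of the block: with --container-runtime=containerd, minikube rejects --cni=false outright (MK_USAGE, exit status 14, "The "containerd" container runtime requires CNI"), so the debug log dump that follows is for a profile that was never created. A hedged sketch of the two cases, with the profile name being only an example:

    # Rejected: containerd provides no built-in pod network, so a CNI is required.
    out/minikube-linux-arm64 start -p cni-demo --container-runtime=containerd --cni=false

    # Accepted: omit --cni so minikube picks one, or name a concrete plugin such as bridge.
    out/minikube-linux-arm64 start -p cni-demo --container-runtime=containerd --cni=bridge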

                                                
                                    
TestPause/serial/Pause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-865658 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-865658 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-865658 --output=json --layout=cluster: exit status 2 (364.000815ms)

                                                
                                                
-- stdout --
	{"Name":"pause-865658","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-865658","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
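
The exit status 2 above is not a failure of the status command itself: the cluster is deliberately paused, the JSON payload is still written to stdout, and the test only parses that payload. A small sketch for pulling individual fields out of it, with the jq expressions keyed to the fields visible above:

    # Overall profile state ("Paused" in this run).
    out/minikube-linux-arm64 status -p pause-865658 --output=json --layout=cluster | jq -r '.StatusName'

    # Per-node component states (apiserver paused, kubelet stopped).
    out/minikube-linux-arm64 status -p pause-865658 --output=json --layout=cluster | jq '.Nodes[].Components'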

                                                
                                    
TestPause/serial/Unpause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-865658 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-865658 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-865658 --alsologtostderr -v=5: (1.100264005s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (2.86s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-865658 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-865658 --alsologtostderr -v=5: (2.86241795s)
--- PASS: TestPause/serial/DeletePaused (2.86s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-865658
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-865658: exit status 1 (22.405525ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-865658: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)
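
The cleanup check amounts to: after delete, no container, volume, or network named after the profile should remain, and the profile must be gone from the profile list. A hedged equivalent using docker's name filters (the --filter flags are an addition for readability, not part of the test):

    out/minikube-linux-arm64 profile list --output json      # the deleted profile must not appear
    docker ps -a --filter name=pause-865658                  # expect no matching containers
    docker network ls --filter name=pause-865658             # expect no user-defined network
    docker volume inspect pause-865658                       # expect a non-zero exit and "no such volume"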

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (155.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1009 19:33:12.942399    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-135957 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.854326511s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-135957 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29103ecd-33b6-49e7-963a-fe18767fe962] Pending
helpers_test.go:344: "busybox" [29103ecd-33b6-49e7-963a-fe18767fe962] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29103ecd-33b6-49e7-963a-fe18767fe962] Running
E1009 19:34:21.512066    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004125229s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-135957 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)
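
The deploy step is a plain kubectl workflow against the profile's context: apply the busybox manifest, wait for the pod to become Ready, then read the container's open-file limit. Roughly, outside the harness (kubectl wait stands in for the polling the test helper does):

    kubectl --context old-k8s-version-135957 create -f testdata/busybox.yaml
    # Block until the labelled pod is Ready, mirroring the 8m budget used above.
    kubectl --context old-k8s-version-135957 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-135957 exec busybox -- /bin/sh -c "ulimit -n"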

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083200 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083200 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m4.707352809s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.436333825s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-135957 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.59s)
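
The enable step overrides both the image and the registry for the metrics-server addon, and the follow-up describe is what confirms the override landed on the Deployment. A quick way to see just the image reference, assuming the same context (the grep is only for readability):

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-135957 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-135957 -n kube-system describe deploy/metrics-server | grep -i 'image:'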

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-135957 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-135957 --alsologtostderr -v=3: (12.330517735s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135957 -n old-k8s-version-135957
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-135957 -n old-k8s-version-135957: exit status 7 (124.032237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-135957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
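
Exit status 7 from the status query is expected for a stopped profile (the harness notes "may be ok"); the point of the step is that addons can still be toggled while the cluster is down so they take effect on the next start. Sketch against the same profile:

    # $? here reflects the failed status call; a stopped host is the expected state.
    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-135957 || echo "host not running (exit $?)"
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-135957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4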

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083200 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2282a82f-7714-4e28-adfd-8245f8e08498] Pending
helpers_test.go:344: "busybox" [2282a82f-7714-4e28-adfd-8245f8e08498] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2282a82f-7714-4e28-adfd-8245f8e08498] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004712084s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-083200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-083200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.472270083s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-083200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-083200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-083200 --alsologtostderr -v=3: (12.336222532s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200: exit status 7 (71.647995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-083200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083200 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1009 19:37:24.582564    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:21.512588    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083200 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.440059417s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wjbv7" [e8939801-11c2-40ac-9172-544227216f73] Running
E1009 19:40:09.875136    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004349732s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wjbv7" [e8939801-11c2-40ac-9172-544227216f73] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004204275s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-083200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-083200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
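
The image audit lists everything in the node's image store and flags entries outside the expected minikube/Kubernetes set (here the kindnet and busybox images pulled by earlier steps). To inspect the same list by hand, assuming jq is available for the JSON form:

    out/minikube-linux-arm64 -p default-k8s-diff-port-083200 image list                     # one image reference per line
    out/minikube-linux-arm64 -p default-k8s-diff-port-083200 image list --format=json | jq .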

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-083200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200: exit status 2 (316.034701ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200: exit status 2 (320.723985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-083200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083200 -n default-k8s-diff-port-083200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-269650 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-269650 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m5.210859759s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5fpql" [4cfa9c09-822c-4c6a-be3d-192c1303a0b8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004561107s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5fpql" [4cfa9c09-822c-4c6a-be3d-192c1303a0b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003648758s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-135957 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-135957 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-135957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135957 -n old-k8s-version-135957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135957 -n old-k8s-version-135957: exit status 2 (336.917983ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-135957 -n old-k8s-version-135957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-135957 -n old-k8s-version-135957: exit status 2 (308.073604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-135957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-135957 -n old-k8s-version-135957
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-135957 -n old-k8s-version-135957
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
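
The two "exit status 2" entries above are the interesting part of the Pause test: while the profile is paused, status reports the API server as Paused and the kubelet as Stopped, and signals that degraded-but-expected state through a non-zero exit code. The same cycle can be driven manually with the commands the test already uses (the post-unpause values in the comments are the expected ones, not shown verbatim in this log):

    out/minikube-linux-arm64 pause -p old-k8s-version-135957
    out/minikube-linux-arm64 status -p old-k8s-version-135957 --format='{{.APIServer}}'   # Paused, exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-135957
    out/minikube-linux-arm64 status -p old-k8s-version-135957 --format='{{.Kubelet}}'     # Running, exit 0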

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (72.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-890188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-890188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m12.405133306s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-269650 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b38e572-e3da-4162-a8a1-353b2bd6cda5] Pending
helpers_test.go:344: "busybox" [1b38e572-e3da-4162-a8a1-353b2bd6cda5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b38e572-e3da-4162-a8a1-353b2bd6cda5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004727278s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-269650 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)
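
DeployApp applies testdata/busybox.yaml, waits for the pod labelled integration-test=busybox to come up, then runs ulimit -n inside it to confirm the container's file-descriptor limit. Reproduced by hand against the same profile (the kubectl wait invocation is a stand-in for the test's own polling helper):

    kubectl --context embed-certs-269650 create -f testdata/busybox.yaml
    kubectl --context embed-certs-269650 wait --for=condition=Ready pod \
      -l integration-test=busybox --timeout=8m0s
    kubectl --context embed-certs-269650 exec busybox -- /bin/sh -c "ulimit -n"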

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-269650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-269650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.338026653s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-269650 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-269650 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-269650 --alsologtostderr -v=3: (12.247339828s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-269650 -n embed-certs-269650
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-269650 -n embed-certs-269650: exit status 7 (78.129039ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-269650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
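
Exit status 7 from status here is the expected outcome, not a failure: the profile was just stopped, so the host is down, and the test's "(may be ok)" note reflects that. Enabling the dashboard addon while stopped only records it in the profile's config; it is actually deployed on the next start (SecondStart below). A compact scripted form of the same check, with the exit-code meaning taken from the observations above rather than from minikube documentation:

    out/minikube-linux-arm64 status -p embed-certs-269650 --format='{{.Host}}'
    if [ "$?" -eq 7 ]; then   # 7: profile is fully stopped
      out/minikube-linux-arm64 addons enable dashboard -p embed-certs-269650
    fi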

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (268.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-269650 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-269650 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.716287169s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-269650 -n embed-certs-269650
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-890188 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc214883-93c3-4522-bf40-91bab95aa9e0] Pending
helpers_test.go:344: "busybox" [bc214883-93c3-4522-bf40-91bab95aa9e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc214883-93c3-4522-bf40-91bab95aa9e0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004274629s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-890188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-890188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-890188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-890188 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-890188 --alsologtostderr -v=3: (12.05751232s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-890188 -n no-preload-890188
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-890188 -n no-preload-890188: exit status 7 (90.907262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-890188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (268.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-890188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1009 19:44:14.203256    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.209652    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.220968    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.242421    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.283800    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.365215    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.527152    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:14.848753    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:15.490790    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:16.772154    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:19.334095    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:21.511683    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/functional-072610/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:24.456058    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:34.697401    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:55.179068    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:09.875080    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.825284    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.831763    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.843106    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.864531    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.905950    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:18.987523    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:19.149533    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:19.471139    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:20.113193    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:21.394578    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:23.955934    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:29.077503    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:36.140808    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:39.319366    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:45:59.800807    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-890188 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m28.124910073s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-890188 -n no-preload-890188
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x7gzf" [844845f9-1c25-4368-90b5-1c09f333eea0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003218406s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x7gzf" [844845f9-1c25-4368-90b5-1c09f333eea0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004149769s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-269650 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-269650 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-269650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-269650 -n embed-certs-269650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-269650 -n embed-certs-269650: exit status 2 (333.09493ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-269650 -n embed-certs-269650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-269650 -n embed-certs-269650: exit status 2 (321.834404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-269650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-269650 -n embed-certs-269650
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-269650 -n embed-certs-269650
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (36.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-182666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1009 19:46:58.062268    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-182666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (36.743162403s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.74s)
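
newest-cni starts with --network-plugin=cni but no CNI deployed, and hands a custom pod network to kubeadm via --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, which is why the later steps warn that pods cannot schedule without additional setup. Whether the CIDR reached kubeadm can be read back off the node object (jsonpath is illustrative; with the default /24 node mask the first node should get a slice of 10.42.0.0/16):

    kubectl --context newest-cni-182666 get nodes \
      -o jsonpath='{.items[0].spec.podCIDR}'   # expected: 10.42.0.0/24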

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-182666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-182666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.657916308s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-182666 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-182666 --alsologtostderr -v=3: (1.288402594s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-182666 -n newest-cni-182666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-182666 -n newest-cni-182666: exit status 7 (72.190146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-182666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (18.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-182666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-182666 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (17.796215923s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-182666 -n newest-cni-182666
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jxg8j" [d4f06ed9-1b4d-44e9-a6fd-d5543ebb338c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004628676s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jxg8j" [d4f06ed9-1b4d-44e9-a6fd-d5543ebb338c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00376265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-890188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-890188 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-890188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-890188 --alsologtostderr -v=1: (1.331525261s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-890188 -n no-preload-890188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-890188 -n no-preload-890188: exit status 2 (543.827398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-890188 -n no-preload-890188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-890188 -n no-preload-890188: exit status 2 (494.842146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-890188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-890188 --alsologtostderr -v=1: (1.007256937s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-890188 -n no-preload-890188
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-890188 -n no-preload-890188
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-182666 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-182666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-182666 -n newest-cni-182666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-182666 -n newest-cni-182666: exit status 2 (375.561592ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-182666 -n newest-cni-182666
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-182666 -n newest-cni-182666: exit status 2 (346.747268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-182666 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-182666 -n newest-cni-182666
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-182666 -n newest-cni-182666
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.78s)
E1009 19:53:12.805396    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (69.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m9.875675666s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (56.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1009 19:48:02.684461    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (56.773827627s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lvzqq" [25b14def-a0c1-46fa-a92c-962add32d8d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004229922s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-138657 "pgrep -a kubelet"
I1009 19:48:49.276841    7596 config.go:182] Loaded profile config "kindnet-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5p8vj" [19e530bf-973e-4988-afea-c833fdb33174] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5p8vj" [19e530bf-973e-4988-afea-c833fdb33174] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003080705s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-138657 "pgrep -a kubelet"
I1009 19:48:51.871472    7596 config.go:182] Loaded profile config "auto-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (8.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gc8m5" [eff0e5b4-ea02-4bbe-9216-3cf49a460821] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gc8m5" [eff0e5b4-ea02-4bbe-9216-3cf49a460821] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004584005s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
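
The NetCatPod/DNS/Localhost/HairPin quartet run for each plugin uses one probe deployment (testdata/netcat-deployment.yaml): it resolves the in-cluster API DNS name, connects to itself over localhost, and finally connects to its own service name, which only succeeds when hairpin traffic is handled correctly. The three probes are plain kubectl invocations and can be rerun against any of the profiles above:

    kubectl --context auto-138657 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"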

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (77.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.668140818s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1009 19:49:41.903896    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/old-k8s-version-135957/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:49:52.944108    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:09.874892    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:50:18.825264    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.718023339s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.72s)
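
Unlike the built-in --cni=kindnet/calico/flannel selectors used elsewhere in this run, this profile passes a manifest path (--cni=testdata/kube-flannel.yaml), so an arbitrary CNI YAML is applied at start time. A quick post-start check that the daemonset from that manifest came up (namespace and label assume a stock kube-flannel manifest; the testdata copy may differ):

    kubectl --context custom-flannel-138657 -n kube-flannel get pods -l app=flannel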

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-138657 "pgrep -a kubelet"
I1009 19:50:25.612177    7596 config.go:182] Loaded profile config "custom-flannel-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kqlcq" [fbbe71ed-ad99-4ffe-9220-56da82f15d1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kqlcq" [fbbe71ed-ad99-4ffe-9220-56da82f15d1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004470177s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p2xnc" [988a4b7b-a000-477b-ab53-c668c1eb93e3] Running
E1009 19:50:46.526342    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/default-k8s-diff-port-083200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004729252s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-138657 "pgrep -a kubelet"
I1009 19:50:49.445174    7596 config.go:182] Loaded profile config "calico-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pcwkm" [eee2ff37-368b-4f3f-a658-e729c16fde28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pcwkm" [eee2ff37-368b-4f3f-a658-e729c16fde28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003724047s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (52.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (52.779900168s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.78s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.240298714s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-138657 "pgrep -a kubelet"
I1009 19:51:50.653897    7596 config.go:182] Loaded profile config "enable-default-cni-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-llqml" [4069dc48-6d47-4654-b84d-73018f75a226] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-llqml" [4069dc48-6d47-4654-b84d-73018f75a226] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.028218945s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-138657 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.219181933s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5nhdp" [32e236e4-9e49-4b59-b77d-b1e3b1e9df8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003856153s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-138657 "pgrep -a kubelet"
I1009 19:52:28.984217    7596 config.go:182] Loaded profile config "flannel-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qfgsp" [f1a5f8db-70f9-4e99-8814-2f72c3b33352] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 19:52:31.829766    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:31.836104    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:31.847426    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:31.868740    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:31.910095    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:31.992266    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:32.153536    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:32.474968    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:33.116817    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qfgsp" [f1a5f8db-70f9-4e99-8814-2f72c3b33352] Running
E1009 19:52:34.398433    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:52:36.959730    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/no-preload-890188/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.011405152s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-138657 "pgrep -a kubelet"
I1009 19:53:38.514988    7596 config.go:182] Loaded profile config "bridge-138657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-138657 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9b598" [d0cb2c9b-c0d6-4e06-a9af-52d17a4ad855] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 19:53:42.922408    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:42.928725    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:42.940060    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:42.961434    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:43.002976    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:43.084657    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:43.246231    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:43.568025    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9b598" [d0cb2c9b-c0d6-4e06-a9af-52d17a4ad855] Running
E1009 19:53:44.210087    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:45.491497    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:53:48.053396    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/kindnet-138657/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003917102s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-138657 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
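Note: the DNS, Localhost and HairPin checks above all reduce to three kubectl exec probes against the netcat deployment. A minimal sketch for re-running them by hand, using the same commands the tests invoke (bridge-138657 is shown as the example context; substitute whichever profile is under test):

kubectl --context bridge-138657 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context bridge-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context bridge-138657 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"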

                                                
                                    

Test skip (27/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-334728 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-334728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-334728
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-732769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-732769
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E1009 19:30:09.874981    7596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/addons-514774/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: kubenet-138657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-138657" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-2290/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:29:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-865658
contexts:
- context:
    cluster: pause-865658
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 19:29:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-865658
  name: pause-865658
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-865658
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/pause-865658/client.crt
    client-key: /home/jenkins/minikube-integration/19780-2290/.minikube/profiles/pause-865658/client.key
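The kubeconfig above is consistent with the errors in this debug dump: only the pause-865658 cluster is registered and current-context is empty, so every kubectl call against the never-started kubenet-138657 profile fails with "context was not found". A quick, generic way to confirm which contexts exist (plain kubectl, not part of the test harness):

kubectl config get-contexts
kubectl config current-context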

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-138657

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-138657"

                                                
                                                
----------------------- debugLogs end: kubenet-138657 [took: 4.413278372s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-138657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-138657
--- SKIP: TestNetworkPlugins/group/kubenet (4.57s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-138657 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-138657

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-138657

>>> host: /etc/nsswitch.conf:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/hosts:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/resolv.conf:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-138657

>>> host: crictl pods:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: crictl containers:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> k8s: describe netcat deployment:
error: context "cilium-138657" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-138657" does not exist

>>> k8s: netcat logs:
error: context "cilium-138657" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-138657" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-138657" does not exist

>>> k8s: coredns logs:
error: context "cilium-138657" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-138657" does not exist

>>> k8s: api server logs:
error: context "cilium-138657" does not exist

>>> host: /etc/cni:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: ip a s:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: ip r s:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: iptables-save:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: iptables table nat:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-138657

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-138657

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-138657" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-138657" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-138657

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-138657

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-138657" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-138657" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-138657" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-138657" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-138657" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: kubelet daemon config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> k8s: kubelet logs:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-138657

>>> host: docker daemon status:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: docker daemon config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: docker system info:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: cri-docker daemon status:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: cri-docker daemon config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: cri-dockerd version:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: containerd daemon status:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: containerd daemon config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: containerd config dump:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: crio daemon status:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: crio daemon config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: /etc/crio:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

>>> host: crio config:
* Profile "cilium-138657" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-138657"

----------------------- debugLogs end: cilium-138657 [took: 5.081093281s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-138657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-138657
--- SKIP: TestNetworkPlugins/group/cilium (5.28s)
