Test Report: Docker_Linux_containerd_arm64 19616

                    
ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                199.8
302    TestStartStop/group/old-k8s-version/serial/SecondStart   381.66
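
Both failures can be re-run individually outside CI. A minimal sketch using the standard Go test runner, assuming minikube's integration tests under test/integration and the same docker/containerd combination as this job (the CI harness wraps this invocation with additional flags, so treat the exact command line as an approximation):

    go test ./test/integration -v -timeout 60m -run 'TestAddons/serial/Volcano'
    go test ./test/integration -v -timeout 60m -run 'TestStartStop/group/old-k8s-version/serial/SecondStart'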
TestAddons/serial/Volcano (199.8s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 49.890576ms
addons_test.go:897: volcano-scheduler stabilized in 50.030546ms
addons_test.go:913: volcano-controller stabilized in 50.599545ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-kl2wz" [903b43fc-c978-4b58-8950-c7ebd7c17efa] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003678669s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-pdtvn" [6b1c4755-0223-4ac0-8f13-0740d301d0b7] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003501486s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jsjzz" [d2f9fc11-3062-4004-b9ac-92b01c478365] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003472325s
addons_test.go:932: (dbg) Run:  kubectl --context addons-509957 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-509957 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-509957 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c713ae65-190e-446e-b66b-7f7cf07de45c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-509957 -n addons-509957
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-12 22:37:19.130163691 +0000 UTC m=+486.574323664
addons_test.go:964: (dbg) Run:  kubectl --context addons-509957 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-509957 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-6ab507a4-839e-4d55-9c4a-aec1649d0033
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bf4fh (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-bf4fh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From     Message
  ----     ------            ----  ----     -------
  Warning  FailedScheduling  3m    volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-509957 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-509957 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
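
The FailedScheduling event above is the root cause: the test-job pod requests cpu: 1, and the single addons-509957 node (2 CPUs, see NanoCpus in the docker inspect below) has too little unreserved CPU left once the system and addon pods are accounted for. A hedged way to confirm this against a live profile, using only standard kubectl output (the context and node name are taken from this run):

    kubectl --context addons-509957 describe node addons-509957
    kubectl --context addons-509957 get pods -A -o wide

The Allocatable and "Allocated resources" sections of the node description show how much CPU is already requested versus the node total.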
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-509957
helpers_test.go:235: (dbg) docker inspect addons-509957:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb",
	        "Created": "2024-09-12T22:30:00.548990151Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1599013,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T22:30:00.730700508Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5a18b2e89815d9320db97822722b50bf88d564940d3d81fe93adf39e9c88570e",
	        "ResolvConfPath": "/var/lib/docker/containers/46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb/hostname",
	        "HostsPath": "/var/lib/docker/containers/46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb/hosts",
	        "LogPath": "/var/lib/docker/containers/46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb/46f48a1a2f911c28fd4baee3307f36fd4486437aaeaeb81e52ad5c8beac7facb-json.log",
	        "Name": "/addons-509957",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-509957:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-509957",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c16306769ea47e31986acec1873bea222e16f35a0fbd97efe3aa2e2e421e1b73-init/diff:/var/lib/docker/overlay2/22619844066f8062a761e6c26d439ab232db1d4015e623ac6dd91ab5ce435ce2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c16306769ea47e31986acec1873bea222e16f35a0fbd97efe3aa2e2e421e1b73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c16306769ea47e31986acec1873bea222e16f35a0fbd97efe3aa2e2e421e1b73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c16306769ea47e31986acec1873bea222e16f35a0fbd97efe3aa2e2e421e1b73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-509957",
	                "Source": "/var/lib/docker/volumes/addons-509957/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-509957",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-509957",
	                "name.minikube.sigs.k8s.io": "addons-509957",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a37f183e2ebc491e57ec8282e91a17be54553611b7fc30da2afdaa6b1a2f9ab3",
	            "SandboxKey": "/var/run/docker/netns/a37f183e2ebc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34639"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34640"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34643"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34641"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34642"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-509957": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0d12b05c386e43ae050aff51af112d28b7af4be9161b86beb34fdb79594e530c",
	                    "EndpointID": "37845e9b14f05b4cd5ec780d6c04287307eed97b761d38717936d0c58b863ff5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-509957",
	                        "46f48a1a2f91"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
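
For reference, the HostConfig block above records the resource cap this profile was created with: Memory 4194304000 bytes (4000 MiB, matching --memory=4000 in the start command) and NanoCpus 2000000000, i.e. 2 CPUs. A hedged one-liner to pull just those fields from a running container, using docker inspect's Go-template --format flag:

    docker inspect addons-509957 --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'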
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-509957 -n addons-509957
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 logs -n 25: (1.566313508s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-570075   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | -p download-only-570075              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| delete  | -p download-only-570075              | download-only-570075   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| start   | -o=json --download-only              | download-only-754658   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | -p download-only-754658              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| delete  | -p download-only-754658              | download-only-754658   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| delete  | -p download-only-570075              | download-only-570075   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| delete  | -p download-only-754658              | download-only-754658   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| start   | --download-only -p                   | download-docker-139580 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | download-docker-139580               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-139580            | download-docker-139580 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| start   | --download-only -p                   | binary-mirror-332333   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | binary-mirror-332333                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43185               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-332333              | binary-mirror-332333   | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| addons  | enable dashboard -p                  | addons-509957          | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | addons-509957                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-509957          | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | addons-509957                        |                        |         |         |                     |                     |
	| start   | -p addons-509957 --wait=true         | addons-509957          | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:29:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:29:34.947147 1598521 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:29:34.947286 1598521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:34.947297 1598521 out.go:358] Setting ErrFile to fd 2...
	I0912 22:29:34.947303 1598521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:34.947566 1598521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:29:34.948060 1598521 out.go:352] Setting JSON to false
	I0912 22:29:34.948976 1598521 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25902,"bootTime":1726154273,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 22:29:34.949054 1598521 start.go:139] virtualization:  
	I0912 22:29:34.951586 1598521 out.go:177] * [addons-509957] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 22:29:34.954282 1598521 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:29:34.954432 1598521 notify.go:220] Checking for updates...
	I0912 22:29:34.958653 1598521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:29:34.960533 1598521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:29:34.962410 1598521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 22:29:34.964431 1598521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 22:29:34.966412 1598521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:29:34.968841 1598521 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:29:34.990605 1598521 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:29:34.990729 1598521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:35.056428 1598521 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 22:29:35.04672431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:35.056549 1598521 docker.go:318] overlay module found
	I0912 22:29:35.058719 1598521 out.go:177] * Using the docker driver based on user configuration
	I0912 22:29:35.060581 1598521 start.go:297] selected driver: docker
	I0912 22:29:35.060601 1598521 start.go:901] validating driver "docker" against <nil>
	I0912 22:29:35.060614 1598521 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:29:35.061236 1598521 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:35.115090 1598521 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 22:29:35.105183157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:35.115269 1598521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:29:35.115533 1598521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:29:35.117684 1598521 out.go:177] * Using Docker driver with root privileges
	I0912 22:29:35.121297 1598521 cni.go:84] Creating CNI manager for ""
	I0912 22:29:35.121320 1598521 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 22:29:35.121330 1598521 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 22:29:35.121415 1598521 start.go:340] cluster config:
	{Name:addons-509957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-509957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:35.124370 1598521 out.go:177] * Starting "addons-509957" primary control-plane node in "addons-509957" cluster
	I0912 22:29:35.126682 1598521 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0912 22:29:35.128742 1598521 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 22:29:35.130668 1598521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 22:29:35.130721 1598521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0912 22:29:35.130732 1598521 cache.go:56] Caching tarball of preloaded images
	I0912 22:29:35.130777 1598521 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 22:29:35.130824 1598521 preload.go:172] Found /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 22:29:35.130850 1598521 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0912 22:29:35.131229 1598521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/config.json ...
	I0912 22:29:35.131302 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/config.json: {Name:mkffe822a0f4842207ae08c3b29884931c537261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:29:35.146346 1598521 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 22:29:35.146483 1598521 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 22:29:35.146509 1598521 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 22:29:35.146519 1598521 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 22:29:35.146527 1598521 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 22:29:35.146537 1598521 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 22:29:52.309188 1598521 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 22:29:52.309224 1598521 cache.go:194] Successfully downloaded all kic artifacts
	I0912 22:29:52.309260 1598521 start.go:360] acquireMachinesLock for addons-509957: {Name:mka5c9ee34022422cc920c8eb35c09ad14d8aadb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:29:52.309391 1598521 start.go:364] duration metric: took 105.279µs to acquireMachinesLock for "addons-509957"
	I0912 22:29:52.309422 1598521 start.go:93] Provisioning new machine with config: &{Name:addons-509957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-509957 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 22:29:52.309519 1598521 start.go:125] createHost starting for "" (driver="docker")
	I0912 22:29:52.312182 1598521 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 22:29:52.312431 1598521 start.go:159] libmachine.API.Create for "addons-509957" (driver="docker")
	I0912 22:29:52.312472 1598521 client.go:168] LocalClient.Create starting
	I0912 22:29:52.312582 1598521 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem
	I0912 22:29:53.230689 1598521 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem
	I0912 22:29:54.034279 1598521 cli_runner.go:164] Run: docker network inspect addons-509957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 22:29:54.050351 1598521 cli_runner.go:211] docker network inspect addons-509957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 22:29:54.050440 1598521 network_create.go:284] running [docker network inspect addons-509957] to gather additional debugging logs...
	I0912 22:29:54.050462 1598521 cli_runner.go:164] Run: docker network inspect addons-509957
	W0912 22:29:54.065558 1598521 cli_runner.go:211] docker network inspect addons-509957 returned with exit code 1
	I0912 22:29:54.065595 1598521 network_create.go:287] error running [docker network inspect addons-509957]: docker network inspect addons-509957: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-509957 not found
	I0912 22:29:54.065610 1598521 network_create.go:289] output of [docker network inspect addons-509957]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-509957 not found
	
	** /stderr **
	I0912 22:29:54.065752 1598521 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:29:54.082879 1598521 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000490d60}
	I0912 22:29:54.082923 1598521 network_create.go:124] attempt to create docker network addons-509957 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0912 22:29:54.082988 1598521 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-509957 addons-509957
	I0912 22:29:54.151224 1598521 network_create.go:108] docker network addons-509957 192.168.49.0/24 created
	I0912 22:29:54.151255 1598521 kic.go:121] calculated static IP "192.168.49.2" for the "addons-509957" container
	I0912 22:29:54.151333 1598521 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 22:29:54.167446 1598521 cli_runner.go:164] Run: docker volume create addons-509957 --label name.minikube.sigs.k8s.io=addons-509957 --label created_by.minikube.sigs.k8s.io=true
	I0912 22:29:54.184482 1598521 oci.go:103] Successfully created a docker volume addons-509957
	I0912 22:29:54.184583 1598521 cli_runner.go:164] Run: docker run --rm --name addons-509957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-509957 --entrypoint /usr/bin/test -v addons-509957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
	I0912 22:29:56.264134 1598521 cli_runner.go:217] Completed: docker run --rm --name addons-509957-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-509957 --entrypoint /usr/bin/test -v addons-509957:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (2.079507916s)
	I0912 22:29:56.264163 1598521 oci.go:107] Successfully prepared a docker volume addons-509957
	I0912 22:29:56.264184 1598521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 22:29:56.264202 1598521 kic.go:194] Starting extracting preloaded images to volume ...
	I0912 22:29:56.264295 1598521 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-509957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 22:30:00.320737 1598521 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-509957:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (4.05640261s)
	I0912 22:30:00.320775 1598521 kic.go:203] duration metric: took 4.056568525s to extract preloaded images to volume ...
	W0912 22:30:00.320928 1598521 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 22:30:00.321083 1598521 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 22:30:00.513186 1598521 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-509957 --name addons-509957 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-509957 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-509957 --network addons-509957 --ip 192.168.49.2 --volume addons-509957:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
	I0912 22:30:00.922955 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Running}}
	I0912 22:30:00.947043 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:00.972736 1598521 cli_runner.go:164] Run: docker exec addons-509957 stat /var/lib/dpkg/alternatives/iptables
	I0912 22:30:01.043591 1598521 oci.go:144] the created container "addons-509957" has a running status.
	I0912 22:30:01.043629 1598521 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa...
	I0912 22:30:01.696685 1598521 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 22:30:01.717113 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:01.744413 1598521 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 22:30:01.744434 1598521 kic_runner.go:114] Args: [docker exec --privileged addons-509957 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 22:30:01.807964 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:01.825701 1598521 machine.go:93] provisionDockerMachine start ...
	I0912 22:30:01.825797 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:01.846203 1598521 main.go:141] libmachine: Using SSH client type: native
	I0912 22:30:01.846569 1598521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34639 <nil> <nil>}
	I0912 22:30:01.846626 1598521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 22:30:02.016399 1598521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-509957
	
	I0912 22:30:02.016489 1598521 ubuntu.go:169] provisioning hostname "addons-509957"
	I0912 22:30:02.016592 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:02.038863 1598521 main.go:141] libmachine: Using SSH client type: native
	I0912 22:30:02.039098 1598521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34639 <nil> <nil>}
	I0912 22:30:02.039109 1598521 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-509957 && echo "addons-509957" | sudo tee /etc/hostname
	I0912 22:30:02.200730 1598521 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-509957
	
	I0912 22:30:02.200812 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:02.218177 1598521 main.go:141] libmachine: Using SSH client type: native
	I0912 22:30:02.218445 1598521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34639 <nil> <nil>}
	I0912 22:30:02.218467 1598521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-509957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-509957/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-509957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:30:02.359640 1598521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:30:02.359664 1598521 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-1592376/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-1592376/.minikube}
	I0912 22:30:02.359683 1598521 ubuntu.go:177] setting up certificates
	I0912 22:30:02.359694 1598521 provision.go:84] configureAuth start
	I0912 22:30:02.359779 1598521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-509957
	I0912 22:30:02.377681 1598521 provision.go:143] copyHostCerts
	I0912 22:30:02.377755 1598521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem (1082 bytes)
	I0912 22:30:02.377880 1598521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem (1123 bytes)
	I0912 22:30:02.377943 1598521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem (1675 bytes)
	I0912 22:30:02.378000 1598521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem org=jenkins.addons-509957 san=[127.0.0.1 192.168.49.2 addons-509957 localhost minikube]
	I0912 22:30:03.702236 1598521 provision.go:177] copyRemoteCerts
	I0912 22:30:03.702315 1598521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:30:03.702359 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:03.727459 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:03.828661 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:30:03.852561 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 22:30:03.876131 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 22:30:03.901518 1598521 provision.go:87] duration metric: took 1.541810869s to configureAuth
	I0912 22:30:03.901550 1598521 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:30:03.901775 1598521 config.go:182] Loaded profile config "addons-509957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:30:03.901791 1598521 machine.go:96] duration metric: took 2.076069751s to provisionDockerMachine
	I0912 22:30:03.901799 1598521 client.go:171] duration metric: took 11.589315013s to LocalClient.Create
	I0912 22:30:03.901825 1598521 start.go:167] duration metric: took 11.589396375s to libmachine.API.Create "addons-509957"
	I0912 22:30:03.901839 1598521 start.go:293] postStartSetup for "addons-509957" (driver="docker")
	I0912 22:30:03.901850 1598521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:30:03.901918 1598521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:30:03.901963 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:03.919099 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:04.024042 1598521 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:30:04.028294 1598521 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:30:04.028336 1598521 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:30:04.028347 1598521 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:30:04.028375 1598521 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 22:30:04.028388 1598521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/addons for local assets ...
	I0912 22:30:04.028478 1598521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/files for local assets ...
	I0912 22:30:04.028507 1598521 start.go:296] duration metric: took 126.660737ms for postStartSetup
	I0912 22:30:04.028892 1598521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-509957
	I0912 22:30:04.051054 1598521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/config.json ...
	I0912 22:30:04.051388 1598521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:30:04.051444 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:04.073670 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:04.173519 1598521 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:30:04.177888 1598521 start.go:128] duration metric: took 11.868351859s to createHost
	I0912 22:30:04.177911 1598521 start.go:83] releasing machines lock for "addons-509957", held for 11.868507239s
	I0912 22:30:04.177985 1598521 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-509957
	I0912 22:30:04.197008 1598521 ssh_runner.go:195] Run: cat /version.json
	I0912 22:30:04.197030 1598521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:30:04.197064 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:04.197067 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:04.217418 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:04.223637 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:04.451939 1598521 ssh_runner.go:195] Run: systemctl --version
	I0912 22:30:04.456069 1598521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:30:04.460176 1598521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 22:30:04.484189 1598521 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:30:04.484321 1598521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:30:04.514313 1598521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 22:30:04.514336 1598521 start.go:495] detecting cgroup driver to use...
	I0912 22:30:04.514369 1598521 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 22:30:04.514425 1598521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 22:30:04.526887 1598521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 22:30:04.538508 1598521 docker.go:217] disabling cri-docker service (if available) ...
	I0912 22:30:04.538572 1598521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:30:04.551515 1598521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:30:04.566555 1598521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:30:04.656253 1598521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:30:04.752669 1598521 docker.go:233] disabling docker service ...
	I0912 22:30:04.752749 1598521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:30:04.772861 1598521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:30:04.784607 1598521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:30:04.890587 1598521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:30:04.988246 1598521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:30:05.000572 1598521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:30:05.036382 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 22:30:05.047779 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 22:30:05.058847 1598521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 22:30:05.058921 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 22:30:05.070328 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 22:30:05.082441 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 22:30:05.094418 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 22:30:05.106459 1598521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:30:05.117194 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 22:30:05.128657 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 22:30:05.139391 1598521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 22:30:05.151217 1598521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:30:05.160449 1598521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:30:05.169694 1598521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:30:05.253026 1598521 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 22:30:05.392890 1598521 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0912 22:30:05.393003 1598521 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 22:30:05.396653 1598521 start.go:563] Will wait 60s for crictl version
	I0912 22:30:05.396736 1598521 ssh_runner.go:195] Run: which crictl
	I0912 22:30:05.400156 1598521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:30:05.439271 1598521 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0912 22:30:05.439378 1598521 ssh_runner.go:195] Run: containerd --version
	I0912 22:30:05.461145 1598521 ssh_runner.go:195] Run: containerd --version
	I0912 22:30:05.485284 1598521 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0912 22:30:05.487139 1598521 cli_runner.go:164] Run: docker network inspect addons-509957 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:30:05.501737 1598521 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 22:30:05.505293 1598521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:30:05.516437 1598521 kubeadm.go:883] updating cluster {Name:addons-509957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-509957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 22:30:05.516560 1598521 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 22:30:05.516640 1598521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:30:05.553607 1598521 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 22:30:05.553632 1598521 containerd.go:534] Images already preloaded, skipping extraction
	I0912 22:30:05.553692 1598521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:30:05.590913 1598521 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 22:30:05.590937 1598521 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:30:05.590955 1598521 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0912 22:30:05.591059 1598521 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-509957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-509957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 22:30:05.591132 1598521 ssh_runner.go:195] Run: sudo crictl info
	I0912 22:30:05.627402 1598521 cni.go:84] Creating CNI manager for ""
	I0912 22:30:05.627428 1598521 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 22:30:05.627437 1598521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 22:30:05.627463 1598521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-509957 NodeName:addons-509957 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:30:05.627597 1598521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-509957"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:30:05.627669 1598521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 22:30:05.636433 1598521 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:30:05.636502 1598521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:30:05.645164 1598521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0912 22:30:05.663068 1598521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:30:05.681511 1598521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0912 22:30:05.699800 1598521 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:30:05.703370 1598521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:30:05.714193 1598521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:30:05.797782 1598521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:30:05.813267 1598521 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957 for IP: 192.168.49.2
	I0912 22:30:05.813327 1598521 certs.go:194] generating shared ca certs ...
	I0912 22:30:05.813370 1598521 certs.go:226] acquiring lock for ca certs: {Name:mk5b7cca91a053f0ec1ca9c487c600f7eefaa6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:05.813546 1598521 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key
	I0912 22:30:06.839473 1598521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt ...
	I0912 22:30:06.839507 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt: {Name:mkee9c2664c76b6968612fdd66a8cf864e3cc6ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:06.839773 1598521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key ...
	I0912 22:30:06.839791 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key: {Name:mk4ea9a15b2ec3a1d5b6d510288ace12c9870ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:06.840500 1598521 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key
	I0912 22:30:07.358462 1598521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.crt ...
	I0912 22:30:07.358497 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.crt: {Name:mk0d391a87c469f6d4ec924a7150a63712b268fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:07.359070 1598521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key ...
	I0912 22:30:07.359089 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key: {Name:mke67c70be456ab1198b130e315f6a75a7603da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:07.359180 1598521 certs.go:256] generating profile certs ...
	I0912 22:30:07.359244 1598521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.key
	I0912 22:30:07.359262 1598521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt with IP's: []
	I0912 22:30:07.667638 1598521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt ...
	I0912 22:30:07.667672 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: {Name:mk5f2cd2d24f64dcf2854faeb0e1fb3faaca0234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:07.667875 1598521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.key ...
	I0912 22:30:07.667891 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.key: {Name:mkdd3cb86d26ff6e880235f204dd76bdcea9e472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:07.669090 1598521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key.db28b86b
	I0912 22:30:07.669117 1598521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt.db28b86b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0912 22:30:08.228987 1598521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt.db28b86b ...
	I0912 22:30:08.229021 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt.db28b86b: {Name:mkcdb28adb38a4cfe57d63293b5e4731ed1cb41c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:08.229216 1598521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key.db28b86b ...
	I0912 22:30:08.229232 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key.db28b86b: {Name:mk32b71cc5557fbf542a041ce0466775cf9a6987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:08.229319 1598521 certs.go:381] copying /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt.db28b86b -> /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt
	I0912 22:30:08.229413 1598521 certs.go:385] copying /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key.db28b86b -> /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key
	I0912 22:30:08.229470 1598521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.key
	I0912 22:30:08.229494 1598521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.crt with IP's: []
	I0912 22:30:08.582626 1598521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.crt ...
	I0912 22:30:08.582658 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.crt: {Name:mk90181876db8076169cceb80fd2a0bb0655d453 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:08.583342 1598521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.key ...
	I0912 22:30:08.583406 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.key: {Name:mk8594328e6108700087f7ac85a50d3e2998f4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:08.584155 1598521 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 22:30:08.584202 1598521 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:30:08.584233 1598521 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:30:08.584268 1598521 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem (1675 bytes)
	I0912 22:30:08.584880 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:30:08.609903 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:30:08.634652 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:30:08.658806 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 22:30:08.682782 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 22:30:08.706778 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 22:30:08.730776 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:30:08.754034 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:30:08.785041 1598521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:30:08.809285 1598521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:30:08.827271 1598521 ssh_runner.go:195] Run: openssl version
	I0912 22:30:08.833480 1598521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:30:08.845824 1598521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:30:08.849417 1598521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 22:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:30:08.849510 1598521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:30:08.856602 1598521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:30:08.865938 1598521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 22:30:08.869264 1598521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 22:30:08.869336 1598521 kubeadm.go:392] StartCluster: {Name:addons-509957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-509957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:30:08.869437 1598521 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0912 22:30:08.869502 1598521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:30:08.907578 1598521 cri.go:89] found id: ""
	I0912 22:30:08.907655 1598521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:30:08.916623 1598521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:30:08.925807 1598521 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0912 22:30:08.925876 1598521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:30:08.935010 1598521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:30:08.935031 1598521 kubeadm.go:157] found existing configuration files:
	
	I0912 22:30:08.935107 1598521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:30:08.943945 1598521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 22:30:08.944033 1598521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 22:30:08.952579 1598521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:30:08.961695 1598521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 22:30:08.961760 1598521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 22:30:08.970084 1598521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:30:08.978916 1598521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 22:30:08.978990 1598521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:30:08.987879 1598521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:30:08.996672 1598521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 22:30:08.996768 1598521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:30:09.006661 1598521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 22:30:09.057088 1598521 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 22:30:09.057210 1598521 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 22:30:09.075058 1598521 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:30:09.075174 1598521 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0912 22:30:09.075237 1598521 kubeadm.go:310] OS: Linux
	I0912 22:30:09.075313 1598521 kubeadm.go:310] CGROUPS_CPU: enabled
	I0912 22:30:09.075392 1598521 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0912 22:30:09.075469 1598521 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0912 22:30:09.075538 1598521 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0912 22:30:09.075605 1598521 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0912 22:30:09.075680 1598521 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0912 22:30:09.075780 1598521 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0912 22:30:09.075861 1598521 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0912 22:30:09.075939 1598521 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0912 22:30:09.140467 1598521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:30:09.140579 1598521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:30:09.140673 1598521 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 22:30:09.146656 1598521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:30:09.149995 1598521 out.go:235]   - Generating certificates and keys ...
	I0912 22:30:09.150194 1598521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 22:30:09.150308 1598521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 22:30:09.451262 1598521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:30:09.820834 1598521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:30:10.123994 1598521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:30:10.791360 1598521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 22:30:11.278784 1598521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 22:30:11.279110 1598521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-509957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 22:30:11.855458 1598521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 22:30:11.855617 1598521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-509957 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 22:30:12.314248 1598521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:30:12.892498 1598521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:30:13.119780 1598521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 22:30:13.120128 1598521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:30:13.552490 1598521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:30:13.900268 1598521 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 22:30:14.793711 1598521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:30:15.207512 1598521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:30:16.018113 1598521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:30:16.019260 1598521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:30:16.024420 1598521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:30:16.026717 1598521 out.go:235]   - Booting up control plane ...
	I0912 22:30:16.026818 1598521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:30:16.026894 1598521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:30:16.026958 1598521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:30:16.038907 1598521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:30:16.045181 1598521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:30:16.045244 1598521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 22:30:16.144370 1598521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 22:30:16.144506 1598521 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 22:30:17.640781 1598521 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501673077s
	I0912 22:30:17.640868 1598521 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 22:30:23.644849 1598521 kubeadm.go:310] [api-check] The API server is healthy after 6.001934553s
	I0912 22:30:23.664179 1598521 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 22:30:23.691339 1598521 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 22:30:23.724420 1598521 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 22:30:23.724619 1598521 kubeadm.go:310] [mark-control-plane] Marking the node addons-509957 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 22:30:23.740303 1598521 kubeadm.go:310] [bootstrap-token] Using token: 25ct6u.fpl8erka19d923f1
	I0912 22:30:23.742322 1598521 out.go:235]   - Configuring RBAC rules ...
	I0912 22:30:23.742446 1598521 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 22:30:23.750815 1598521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 22:30:23.759840 1598521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 22:30:23.763767 1598521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 22:30:23.768253 1598521 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 22:30:23.772274 1598521 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 22:30:24.049471 1598521 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 22:30:24.478428 1598521 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 22:30:25.051557 1598521 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 22:30:25.052868 1598521 kubeadm.go:310] 
	I0912 22:30:25.052947 1598521 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 22:30:25.052959 1598521 kubeadm.go:310] 
	I0912 22:30:25.053044 1598521 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 22:30:25.053055 1598521 kubeadm.go:310] 
	I0912 22:30:25.053080 1598521 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 22:30:25.053146 1598521 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 22:30:25.053201 1598521 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 22:30:25.053210 1598521 kubeadm.go:310] 
	I0912 22:30:25.053263 1598521 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 22:30:25.053272 1598521 kubeadm.go:310] 
	I0912 22:30:25.053324 1598521 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 22:30:25.053331 1598521 kubeadm.go:310] 
	I0912 22:30:25.053381 1598521 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 22:30:25.053457 1598521 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 22:30:25.053527 1598521 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 22:30:25.053531 1598521 kubeadm.go:310] 
	I0912 22:30:25.053612 1598521 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 22:30:25.053686 1598521 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 22:30:25.053691 1598521 kubeadm.go:310] 
	I0912 22:30:25.053772 1598521 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 25ct6u.fpl8erka19d923f1 \
	I0912 22:30:25.053871 1598521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:836f3f83371b53325f5cfe3d1d18642045028ffcee7ce46bed58b88d4493f748 \
	I0912 22:30:25.053893 1598521 kubeadm.go:310] 	--control-plane 
	I0912 22:30:25.053898 1598521 kubeadm.go:310] 
	I0912 22:30:25.053979 1598521 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 22:30:25.053984 1598521 kubeadm.go:310] 
	I0912 22:30:25.054062 1598521 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 25ct6u.fpl8erka19d923f1 \
	I0912 22:30:25.054161 1598521 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:836f3f83371b53325f5cfe3d1d18642045028ffcee7ce46bed58b88d4493f748 
	I0912 22:30:25.058651 1598521 kubeadm.go:310] W0912 22:30:09.049781    1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 22:30:25.058943 1598521 kubeadm.go:310] W0912 22:30:09.050709    1031 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 22:30:25.059153 1598521 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0912 22:30:25.059257 1598521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:30:25.059279 1598521 cni.go:84] Creating CNI manager for ""
	I0912 22:30:25.059294 1598521 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 22:30:25.062887 1598521 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 22:30:25.064775 1598521 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:30:25.069184 1598521 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0912 22:30:25.069210 1598521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0912 22:30:25.092194 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:30:25.405849 1598521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 22:30:25.405978 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:25.406065 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-509957 minikube.k8s.io/updated_at=2024_09_12T22_30_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-509957 minikube.k8s.io/primary=true
	I0912 22:30:25.621410 1598521 ops.go:34] apiserver oom_adj: -16
	I0912 22:30:25.621497 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:26.121689 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:26.622118 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:27.122634 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:27.622361 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:28.121652 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:28.622150 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:29.122327 1598521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:30:29.239288 1598521 kubeadm.go:1113] duration metric: took 3.833351784s to wait for elevateKubeSystemPrivileges
	I0912 22:30:29.239323 1598521 kubeadm.go:394] duration metric: took 20.370014031s to StartCluster
	I0912 22:30:29.239342 1598521 settings.go:142] acquiring lock: {Name:mk1fdbbc4ffc0e3fc6419399beeda4839e1c5a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:29.239848 1598521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:30:29.240270 1598521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/kubeconfig: {Name:mk20814b10c438de6fa8214738e210df331cf1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:30:29.240799 1598521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:30:29.240835 1598521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 22:30:29.241086 1598521 config.go:182] Loaded profile config "addons-509957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:30:29.241119 1598521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 22:30:29.241192 1598521 addons.go:69] Setting yakd=true in profile "addons-509957"
	I0912 22:30:29.241213 1598521 addons.go:234] Setting addon yakd=true in "addons-509957"
	I0912 22:30:29.241238 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.241686 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.242199 1598521 addons.go:69] Setting cloud-spanner=true in profile "addons-509957"
	I0912 22:30:29.242228 1598521 addons.go:234] Setting addon cloud-spanner=true in "addons-509957"
	I0912 22:30:29.242264 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.242682 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.242980 1598521 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-509957"
	I0912 22:30:29.243037 1598521 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-509957"
	I0912 22:30:29.243076 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.243548 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.245565 1598521 addons.go:69] Setting registry=true in profile "addons-509957"
	I0912 22:30:29.245600 1598521 addons.go:234] Setting addon registry=true in "addons-509957"
	I0912 22:30:29.245642 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.246054 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.248848 1598521 addons.go:69] Setting storage-provisioner=true in profile "addons-509957"
	I0912 22:30:29.248882 1598521 addons.go:234] Setting addon storage-provisioner=true in "addons-509957"
	I0912 22:30:29.248920 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.251612 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.252577 1598521 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-509957"
	I0912 22:30:29.265820 1598521 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-509957"
	I0912 22:30:29.265909 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.266484 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.274284 1598521 addons.go:69] Setting default-storageclass=true in profile "addons-509957"
	I0912 22:30:29.283817 1598521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-509957"
	I0912 22:30:29.284215 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.261068 1598521 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-509957"
	I0912 22:30:29.289075 1598521 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-509957"
	I0912 22:30:29.289433 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.261082 1598521 addons.go:69] Setting volcano=true in profile "addons-509957"
	I0912 22:30:29.299140 1598521 addons.go:234] Setting addon volcano=true in "addons-509957"
	I0912 22:30:29.299190 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.299633 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.312194 1598521 addons.go:69] Setting gcp-auth=true in profile "addons-509957"
	I0912 22:30:29.312290 1598521 mustload.go:65] Loading cluster: addons-509957
	I0912 22:30:29.261089 1598521 addons.go:69] Setting volumesnapshots=true in profile "addons-509957"
	I0912 22:30:29.312658 1598521 addons.go:234] Setting addon volumesnapshots=true in "addons-509957"
	I0912 22:30:29.312690 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.313128 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.314114 1598521 config.go:182] Loaded profile config "addons-509957": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:30:29.314428 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.265661 1598521 out.go:177] * Verifying Kubernetes components...
	I0912 22:30:29.335757 1598521 addons.go:69] Setting ingress=true in profile "addons-509957"
	I0912 22:30:29.335853 1598521 addons.go:234] Setting addon ingress=true in "addons-509957"
	I0912 22:30:29.335932 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.336490 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.336618 1598521 addons.go:69] Setting ingress-dns=true in profile "addons-509957"
	I0912 22:30:29.336658 1598521 addons.go:234] Setting addon ingress-dns=true in "addons-509957"
	I0912 22:30:29.336715 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.339113 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.339263 1598521 addons.go:69] Setting inspektor-gadget=true in profile "addons-509957"
	I0912 22:30:29.339306 1598521 addons.go:234] Setting addon inspektor-gadget=true in "addons-509957"
	I0912 22:30:29.339350 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.339865 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.364651 1598521 addons.go:69] Setting metrics-server=true in profile "addons-509957"
	I0912 22:30:29.364699 1598521 addons.go:234] Setting addon metrics-server=true in "addons-509957"
	I0912 22:30:29.364737 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.365234 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.365423 1598521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:30:29.370548 1598521 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 22:30:29.378553 1598521 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 22:30:29.400185 1598521 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 22:30:29.400258 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 22:30:29.400354 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.419452 1598521 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 22:30:29.419516 1598521 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 22:30:29.419626 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.425813 1598521 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 22:30:29.427802 1598521 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 22:30:29.427825 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 22:30:29.427895 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.447551 1598521 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 22:30:29.451828 1598521 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 22:30:29.454432 1598521 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 22:30:29.454499 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 22:30:29.454608 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.467334 1598521 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-509957"
	I0912 22:30:29.467375 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.471091 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.487047 1598521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:30:29.489687 1598521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:30:29.489710 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 22:30:29.489783 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.524351 1598521 addons.go:234] Setting addon default-storageclass=true in "addons-509957"
	I0912 22:30:29.524395 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.524807 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:29.543026 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 22:30:29.569036 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 22:30:29.570639 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 22:30:29.570677 1598521 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 22:30:29.570766 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.592465 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:29.594109 1598521 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 22:30:29.644204 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 22:30:29.646257 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 22:30:29.649116 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 22:30:29.651322 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 22:30:29.653545 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 22:30:29.661228 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.662019 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.662534 1598521 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 22:30:29.663620 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.669984 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.677314 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 22:30:29.687128 1598521 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 22:30:29.687322 1598521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 22:30:29.687398 1598521 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 22:30:29.689333 1598521 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 22:30:29.691495 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 22:30:29.691521 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 22:30:29.691596 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.691778 1598521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 22:30:29.691824 1598521 out.go:177]   - Using image docker.io/busybox:stable
	I0912 22:30:29.692344 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 22:30:29.692427 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 22:30:29.692511 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.706863 1598521 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 22:30:29.707358 1598521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 22:30:29.707377 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 22:30:29.707444 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.710128 1598521 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 22:30:29.710200 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 22:30:29.710302 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.731896 1598521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 22:30:29.737409 1598521 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 22:30:29.737434 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 22:30:29.737504 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.745955 1598521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 22:30:29.746020 1598521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 22:30:29.746910 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.764876 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.766451 1598521 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 22:30:29.767987 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.768440 1598521 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 22:30:29.770385 1598521 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 22:30:29.770403 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 22:30:29.770463 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.770646 1598521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 22:30:29.770655 1598521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 22:30:29.770701 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:29.844812 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.871412 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.891053 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.892145 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.897625 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.908216 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.909113 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:29.910458 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	W0912 22:30:29.910567 1598521 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0912 22:30:29.910601 1598521 retry.go:31] will retry after 172.208709ms: ssh: handshake failed: EOF
	I0912 22:30:29.994247 1598521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 22:30:29.994503 1598521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0912 22:30:30.091285 1598521 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0912 22:30:30.091476 1598521 retry.go:31] will retry after 481.605882ms: ssh: handshake failed: EOF
	I0912 22:30:30.201606 1598521 node_ready.go:35] waiting up to 6m0s for node "addons-509957" to be "Ready" ...
	I0912 22:30:30.208877 1598521 node_ready.go:49] node "addons-509957" has status "Ready":"True"
	I0912 22:30:30.208969 1598521 node_ready.go:38] duration metric: took 7.270126ms for node "addons-509957" to be "Ready" ...
	I0912 22:30:30.209017 1598521 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:30:30.232944 1598521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace to be "Ready" ...
	I0912 22:30:30.448679 1598521 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 22:30:30.448753 1598521 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 22:30:30.525719 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 22:30:30.578165 1598521 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 22:30:30.578190 1598521 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 22:30:30.620880 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 22:30:30.655401 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 22:30:30.687564 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 22:30:30.694158 1598521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 22:30:30.694180 1598521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 22:30:30.702000 1598521 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 22:30:30.702027 1598521 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 22:30:30.708889 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 22:30:30.736196 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:30:30.755379 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 22:30:30.772939 1598521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 22:30:30.773012 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 22:30:30.783054 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 22:30:30.783133 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 22:30:30.794101 1598521 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 22:30:30.794169 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 22:30:30.869300 1598521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 22:30:30.869372 1598521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 22:30:30.942096 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 22:30:30.942169 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 22:30:30.988185 1598521 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 22:30:30.988257 1598521 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 22:30:31.038440 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 22:30:31.050563 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 22:30:31.134141 1598521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 22:30:31.134214 1598521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 22:30:31.184803 1598521 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 22:30:31.184872 1598521 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 22:30:31.187501 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 22:30:31.187570 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 22:30:31.188815 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 22:30:31.188880 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 22:30:31.285785 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 22:30:31.285861 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 22:30:31.296456 1598521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 22:30:31.296524 1598521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 22:30:31.332473 1598521 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 22:30:31.332543 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 22:30:31.362809 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 22:30:31.393465 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 22:30:31.393537 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 22:30:31.406772 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 22:30:31.406843 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 22:30:31.433939 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 22:30:31.433966 1598521 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 22:30:31.466632 1598521 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.472102425s)
	I0912 22:30:31.466663 1598521 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0912 22:30:31.569908 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 22:30:31.722489 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 22:30:31.722568 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 22:30:31.740488 1598521 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 22:30:31.740512 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 22:30:31.757642 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 22:30:31.757669 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 22:30:31.970704 1598521 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-509957" context rescaled to 1 replicas
	I0912 22:30:31.971815 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 22:30:31.971837 1598521 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 22:30:31.986952 1598521 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 22:30:31.986979 1598521 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 22:30:31.999884 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 22:30:32.240284 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:32.322439 1598521 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 22:30:32.322463 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 22:30:32.343014 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 22:30:32.343039 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 22:30:32.404871 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 22:30:32.496885 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 22:30:32.496913 1598521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 22:30:32.753032 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 22:30:32.753058 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 22:30:33.086112 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 22:30:33.086137 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 22:30:33.565703 1598521 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 22:30:33.565777 1598521 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 22:30:33.911519 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 22:30:34.038670 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.383242884s)
	I0912 22:30:34.038764 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.417704086s)
	I0912 22:30:34.038813 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.513069385s)
	I0912 22:30:34.739281 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:36.742935 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:36.806376 1598521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 22:30:36.806525 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:36.835457 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:37.121090 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.433492525s)
	I0912 22:30:37.121124 1598521 addons.go:475] Verifying addon ingress=true in "addons-509957"
	I0912 22:30:37.121281 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.412362899s)
	I0912 22:30:37.121593 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.385329402s)
	I0912 22:30:37.127412 1598521 out.go:177] * Verifying ingress addon...
	I0912 22:30:37.130802 1598521 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 22:30:37.138952 1598521 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 22:30:37.138981 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:37.213970 1598521 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 22:30:37.352864 1598521 addons.go:234] Setting addon gcp-auth=true in "addons-509957"
	I0912 22:30:37.352922 1598521 host.go:66] Checking if "addons-509957" exists ...
	I0912 22:30:37.353392 1598521 cli_runner.go:164] Run: docker container inspect addons-509957 --format={{.State.Status}}
	I0912 22:30:37.388128 1598521 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 22:30:37.388192 1598521 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-509957
	I0912 22:30:37.411036 1598521 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/addons-509957/id_rsa Username:docker}
	I0912 22:30:37.636171 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:38.179281 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:38.658976 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:38.753203 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:38.990978 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.940337744s)
	I0912 22:30:38.991062 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.628186628s)
	I0912 22:30:38.991074 1598521 addons.go:475] Verifying addon metrics-server=true in "addons-509957"
	I0912 22:30:38.991123 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.421142903s)
	I0912 22:30:38.990910 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.952384425s)
	I0912 22:30:38.991304 1598521 addons.go:475] Verifying addon registry=true in "addons-509957"
	I0912 22:30:38.991593 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.236134845s)
	I0912 22:30:38.991788 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.991878987s)
	W0912 22:30:38.991824 1598521 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 22:30:38.991852 1598521 retry.go:31] will retry after 138.929316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 22:30:38.991954 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.587045287s)
	I0912 22:30:38.993348 1598521 out.go:177] * Verifying registry addon...
	I0912 22:30:38.993354 1598521 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-509957 service yakd-dashboard -n yakd-dashboard
	
	I0912 22:30:38.996658 1598521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 22:30:39.013874 1598521 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 22:30:39.013907 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:39.131770 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 22:30:39.150905 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:39.501630 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:39.636342 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:39.846880 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.93526467s)
	I0912 22:30:39.846920 1598521 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-509957"
	I0912 22:30:39.847133 1598521 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.458978259s)
	I0912 22:30:39.850041 1598521 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 22:30:39.850100 1598521 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 22:30:39.853626 1598521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 22:30:39.855661 1598521 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 22:30:39.857727 1598521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 22:30:39.857762 1598521 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 22:30:39.865850 1598521 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 22:30:39.865881 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:39.953905 1598521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 22:30:39.953930 1598521 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 22:30:40.022674 1598521 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 22:30:40.022701 1598521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 22:30:40.027664 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:40.155300 1598521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 22:30:40.157023 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:40.358962 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:40.500916 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:40.635335 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:40.859367 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:40.939778 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.807956753s)
	I0912 22:30:41.000156 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:41.145367 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:41.224375 1598521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.069034134s)
	I0912 22:30:41.229009 1598521 addons.go:475] Verifying addon gcp-auth=true in "addons-509957"
	I0912 22:30:41.232389 1598521 out.go:177] * Verifying gcp-auth addon...
	I0912 22:30:41.235070 1598521 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 22:30:41.244356 1598521 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 22:30:41.249810 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:41.359086 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:41.501495 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:41.636617 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:41.858879 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:42.000448 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:42.149922 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:42.359332 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:42.501732 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:42.634670 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:42.860910 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:43.000716 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:43.134733 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:43.366216 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:43.502249 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:43.635741 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:43.742033 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:43.860774 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:44.003165 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:44.137397 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:44.359482 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:44.500925 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:44.635776 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:44.858582 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:45.000961 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:45.140153 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:45.361397 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:45.502821 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:45.635177 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:45.859842 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:46.004493 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:46.136179 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:46.242586 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:46.362034 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:46.500908 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:46.661226 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:46.858530 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:47.001117 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:47.137016 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:47.358458 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:47.501517 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:47.635756 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:47.858367 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:48.000998 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:48.136101 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:48.358910 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:48.500711 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:48.634927 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:48.741143 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:48.859138 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:49.001253 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:49.136489 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:49.358203 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:49.501334 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:49.637529 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:49.864262 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:50.001085 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:50.136880 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:50.358744 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:50.501743 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:50.634892 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:50.742273 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:50.859046 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:51.000331 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:51.135637 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:51.359641 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:51.501482 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:51.635780 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:51.857788 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:52.000400 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:52.134802 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:52.358743 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:52.500487 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:52.635940 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:52.858644 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:53.000028 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:53.135462 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:53.239863 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:53.359454 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:53.501364 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:53.635004 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:53.858730 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:54.001323 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:54.136347 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:54.358083 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:54.501375 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:54.635773 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:54.858554 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:55.000467 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:55.135896 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:55.245557 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:55.358992 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:55.501108 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:55.635572 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:55.858494 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:56.000671 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:56.136767 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:56.358698 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:56.500474 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:56.636101 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:56.858728 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:57.000480 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:57.136220 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:57.358343 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:57.501052 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:57.635184 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:57.739578 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:30:57.859149 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:58.000037 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:58.135973 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:58.358435 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:58.501017 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:58.639966 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:58.858067 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:59.007764 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:59.135979 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:59.359786 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:30:59.500365 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:30:59.641600 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:30:59.859177 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:00.000636 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:00.136969 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:00.262879 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:00.360785 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:00.500844 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:00.635089 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:00.858276 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:01.001099 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:01.135668 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:01.359635 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:01.500807 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:01.634992 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:01.858913 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:02.001294 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:02.135472 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:02.358436 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:02.501241 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:02.636293 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:02.739983 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:02.857894 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:03.000253 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:03.135958 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:03.359684 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:03.500880 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:03.635139 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:03.858239 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:04.005458 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:04.134831 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:04.358331 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:04.501105 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:04.635540 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:04.858487 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:05.000558 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:05.135250 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:05.239614 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:05.359912 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:05.500517 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:05.635177 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:05.858896 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:06.000909 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:06.135316 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:06.359627 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:06.501261 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:06.636746 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:06.859181 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:07.000614 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:07.135886 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:07.240888 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:07.359361 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:07.501143 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:07.635928 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:07.858796 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:08.000866 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:08.135016 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:08.360016 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:08.500688 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:08.635046 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:08.859066 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:09.002575 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:09.134773 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:09.243084 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:09.357956 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:09.500954 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:09.634847 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:09.858909 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:10.000723 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:10.135175 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:10.358693 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:10.500460 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:10.635874 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:10.858546 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:11.000881 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:11.134850 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:11.360917 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:11.501494 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:11.635996 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:11.739605 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:11.858648 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:12.000838 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:12.136161 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:12.359307 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:12.501124 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:12.638441 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:12.858731 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:13.000353 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:13.136822 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:13.359745 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:13.501508 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:13.635986 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:13.742031 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:13.859482 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:14.000921 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:14.135600 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:14.358039 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:14.501577 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:14.637206 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:14.859524 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:15.000210 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:15.136020 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:15.359031 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:15.501593 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:15.635961 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:15.859338 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:16.000645 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:16.135059 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:16.239402 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:16.358908 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:16.500083 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:16.637015 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:16.858608 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:17.001582 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:17.136362 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:17.360287 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:17.500813 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:17.635458 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:17.859846 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:18.002196 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:18.135834 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:18.240424 1598521 pod_ready.go:103] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"False"
	I0912 22:31:18.359656 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:18.500847 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:18.636029 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:18.860072 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:19.000885 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:19.135548 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:19.362755 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:19.500337 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:19.636292 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:19.745831 1598521 pod_ready.go:93] pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:19.745858 1598521 pod_ready.go:82] duration metric: took 49.512831154s for pod "coredns-7c65d6cfc9-fk594" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.745869 1598521 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r87t7" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.754179 1598521 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-r87t7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-r87t7" not found
	I0912 22:31:19.754208 1598521 pod_ready.go:82] duration metric: took 8.331595ms for pod "coredns-7c65d6cfc9-r87t7" in "kube-system" namespace to be "Ready" ...
	E0912 22:31:19.754220 1598521 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-r87t7" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-r87t7" not found
	I0912 22:31:19.754228 1598521 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.765556 1598521 pod_ready.go:93] pod "etcd-addons-509957" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:19.765583 1598521 pod_ready.go:82] duration metric: took 11.347665ms for pod "etcd-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.765599 1598521 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.786373 1598521 pod_ready.go:93] pod "kube-apiserver-addons-509957" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:19.786399 1598521 pod_ready.go:82] duration metric: took 20.791896ms for pod "kube-apiserver-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.786411 1598521 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.797270 1598521 pod_ready.go:93] pod "kube-controller-manager-addons-509957" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:19.797297 1598521 pod_ready.go:82] duration metric: took 10.877686ms for pod "kube-controller-manager-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.797310 1598521 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cdr7c" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.860027 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:19.936937 1598521 pod_ready.go:93] pod "kube-proxy-cdr7c" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:19.936974 1598521 pod_ready.go:82] duration metric: took 139.656896ms for pod "kube-proxy-cdr7c" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:19.936986 1598521 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:20.001045 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:20.135404 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:20.341162 1598521 pod_ready.go:93] pod "kube-scheduler-addons-509957" in "kube-system" namespace has status "Ready":"True"
	I0912 22:31:20.341187 1598521 pod_ready.go:82] duration metric: took 404.1928ms for pod "kube-scheduler-addons-509957" in "kube-system" namespace to be "Ready" ...
	I0912 22:31:20.341198 1598521 pod_ready.go:39] duration metric: took 50.132123795s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:31:20.341212 1598521 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:31:20.341275 1598521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:31:20.359277 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:20.365145 1598521 api_server.go:72] duration metric: took 51.124278665s to wait for apiserver process to appear ...
	I0912 22:31:20.365167 1598521 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:31:20.365187 1598521 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 22:31:20.372938 1598521 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 22:31:20.373891 1598521 api_server.go:141] control plane version: v1.31.1
	I0912 22:31:20.373917 1598521 api_server.go:131] duration metric: took 8.742219ms to wait for apiserver health ...
	I0912 22:31:20.373929 1598521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:31:20.500591 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:20.550402 1598521 system_pods.go:59] 18 kube-system pods found
	I0912 22:31:20.550439 1598521 system_pods.go:61] "coredns-7c65d6cfc9-fk594" [3fdd49f3-1b72-490c-b492-297fe4b73b5e] Running
	I0912 22:31:20.550449 1598521 system_pods.go:61] "csi-hostpath-attacher-0" [e8b6cc0a-5f1a-456a-ae2e-5cdbdf111b45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 22:31:20.550459 1598521 system_pods.go:61] "csi-hostpath-resizer-0" [7ccc8f3f-2cbd-451e-910d-eb42aab81f0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 22:31:20.550467 1598521 system_pods.go:61] "csi-hostpathplugin-f2f6f" [2d5cacd9-38f9-4231-940b-495cf4f916d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 22:31:20.550472 1598521 system_pods.go:61] "etcd-addons-509957" [e3669b92-d093-4e95-b35c-51c29b62d0b6] Running
	I0912 22:31:20.550481 1598521 system_pods.go:61] "kindnet-glgtc" [bd831a50-58c3-4634-bdb8-9f83ab4a1384] Running
	I0912 22:31:20.550485 1598521 system_pods.go:61] "kube-apiserver-addons-509957" [241b6923-37c6-4101-8789-5ec72ad41729] Running
	I0912 22:31:20.550495 1598521 system_pods.go:61] "kube-controller-manager-addons-509957" [aaa86d46-2a15-4374-85ab-fe66a80de6b6] Running
	I0912 22:31:20.550499 1598521 system_pods.go:61] "kube-ingress-dns-minikube" [8645d094-a227-47a1-be4f-abd94f998b77] Running
	I0912 22:31:20.550503 1598521 system_pods.go:61] "kube-proxy-cdr7c" [8980b27b-275c-403d-ba90-1f5b3cedff3b] Running
	I0912 22:31:20.550508 1598521 system_pods.go:61] "kube-scheduler-addons-509957" [9face47e-3159-466f-be93-072b34792a3a] Running
	I0912 22:31:20.550518 1598521 system_pods.go:61] "metrics-server-84c5f94fbc-g4znh" [839c9ade-c469-4f4d-8fb0-9d230575561b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 22:31:20.550526 1598521 system_pods.go:61] "nvidia-device-plugin-daemonset-c7dzm" [43c65b34-f0d5-4bfd-9348-e239e413a3cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 22:31:20.550534 1598521 system_pods.go:61] "registry-66c9cd494c-qlq9q" [31441b74-a88f-48b1-bd9f-37c0b02ea6a0] Running
	I0912 22:31:20.550540 1598521 system_pods.go:61] "registry-proxy-q6ksf" [c1a39170-c345-46f4-845f-d000efef9490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 22:31:20.550544 1598521 system_pods.go:61] "snapshot-controller-56fcc65765-r68ft" [a3dc0db9-67a4-4f6a-9d2c-88b2285c918b] Running
	I0912 22:31:20.550554 1598521 system_pods.go:61] "snapshot-controller-56fcc65765-wsmd8" [886b09a7-1ee3-4229-9ada-6e7f3e73a912] Running
	I0912 22:31:20.550558 1598521 system_pods.go:61] "storage-provisioner" [54f492b2-9179-44b5-8880-8e9911ac3e8d] Running
	I0912 22:31:20.550563 1598521 system_pods.go:74] duration metric: took 176.628513ms to wait for pod list to return data ...
	I0912 22:31:20.550571 1598521 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:31:20.635688 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:20.738345 1598521 default_sa.go:45] found service account: "default"
	I0912 22:31:20.738375 1598521 default_sa.go:55] duration metric: took 187.795478ms for default service account to be created ...
	I0912 22:31:20.738385 1598521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:31:20.859800 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:20.961034 1598521 system_pods.go:86] 18 kube-system pods found
	I0912 22:31:20.961109 1598521 system_pods.go:89] "coredns-7c65d6cfc9-fk594" [3fdd49f3-1b72-490c-b492-297fe4b73b5e] Running
	I0912 22:31:20.961136 1598521 system_pods.go:89] "csi-hostpath-attacher-0" [e8b6cc0a-5f1a-456a-ae2e-5cdbdf111b45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 22:31:20.961162 1598521 system_pods.go:89] "csi-hostpath-resizer-0" [7ccc8f3f-2cbd-451e-910d-eb42aab81f0f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 22:31:20.961200 1598521 system_pods.go:89] "csi-hostpathplugin-f2f6f" [2d5cacd9-38f9-4231-940b-495cf4f916d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 22:31:20.961219 1598521 system_pods.go:89] "etcd-addons-509957" [e3669b92-d093-4e95-b35c-51c29b62d0b6] Running
	I0912 22:31:20.961242 1598521 system_pods.go:89] "kindnet-glgtc" [bd831a50-58c3-4634-bdb8-9f83ab4a1384] Running
	I0912 22:31:20.961274 1598521 system_pods.go:89] "kube-apiserver-addons-509957" [241b6923-37c6-4101-8789-5ec72ad41729] Running
	I0912 22:31:20.961295 1598521 system_pods.go:89] "kube-controller-manager-addons-509957" [aaa86d46-2a15-4374-85ab-fe66a80de6b6] Running
	I0912 22:31:20.961316 1598521 system_pods.go:89] "kube-ingress-dns-minikube" [8645d094-a227-47a1-be4f-abd94f998b77] Running
	I0912 22:31:20.961337 1598521 system_pods.go:89] "kube-proxy-cdr7c" [8980b27b-275c-403d-ba90-1f5b3cedff3b] Running
	I0912 22:31:20.961372 1598521 system_pods.go:89] "kube-scheduler-addons-509957" [9face47e-3159-466f-be93-072b34792a3a] Running
	I0912 22:31:20.961397 1598521 system_pods.go:89] "metrics-server-84c5f94fbc-g4znh" [839c9ade-c469-4f4d-8fb0-9d230575561b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 22:31:20.961420 1598521 system_pods.go:89] "nvidia-device-plugin-daemonset-c7dzm" [43c65b34-f0d5-4bfd-9348-e239e413a3cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0912 22:31:20.961443 1598521 system_pods.go:89] "registry-66c9cd494c-qlq9q" [31441b74-a88f-48b1-bd9f-37c0b02ea6a0] Running
	I0912 22:31:20.961481 1598521 system_pods.go:89] "registry-proxy-q6ksf" [c1a39170-c345-46f4-845f-d000efef9490] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 22:31:20.961508 1598521 system_pods.go:89] "snapshot-controller-56fcc65765-r68ft" [a3dc0db9-67a4-4f6a-9d2c-88b2285c918b] Running
	I0912 22:31:20.961531 1598521 system_pods.go:89] "snapshot-controller-56fcc65765-wsmd8" [886b09a7-1ee3-4229-9ada-6e7f3e73a912] Running
	I0912 22:31:20.961553 1598521 system_pods.go:89] "storage-provisioner" [54f492b2-9179-44b5-8880-8e9911ac3e8d] Running
	I0912 22:31:20.961590 1598521 system_pods.go:126] duration metric: took 223.197632ms to wait for k8s-apps to be running ...
	I0912 22:31:20.961615 1598521 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:31:20.961704 1598521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:31:20.975568 1598521 system_svc.go:56] duration metric: took 13.942847ms WaitForService to wait for kubelet
	I0912 22:31:20.975598 1598521 kubeadm.go:582] duration metric: took 51.734737207s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:31:20.975620 1598521 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:31:21.005371 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:21.136955 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:21.145252 1598521 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0912 22:31:21.145286 1598521 node_conditions.go:123] node cpu capacity is 2
	I0912 22:31:21.145300 1598521 node_conditions.go:105] duration metric: took 169.67471ms to run NodePressure ...
	I0912 22:31:21.145348 1598521 start.go:241] waiting for startup goroutines ...
	I0912 22:31:21.363108 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:21.501300 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:21.638084 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:21.863122 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:22.001234 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:22.136530 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:22.360117 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:22.518718 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:22.635879 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:22.895454 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:23.003299 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:23.136847 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:23.359837 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:23.500941 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:23.635833 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:23.858092 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:24.001003 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:24.136087 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:24.359019 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:24.501496 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:24.635439 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:24.858427 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:25.000651 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:25.135033 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:25.358931 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:25.500847 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 22:31:25.637704 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:25.867074 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:26.004990 1598521 kapi.go:107] duration metric: took 47.008328344s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 22:31:26.134772 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:26.358732 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:26.635637 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:26.859462 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:27.136235 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:27.358432 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:27.637291 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:27.860193 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:28.136124 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:28.358817 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:28.634866 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:28.858995 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:29.135582 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:29.360040 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:29.635001 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:29.858329 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:30.135985 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:30.358789 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:30.636200 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:30.860449 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:31.135375 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:31.359903 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:31.635136 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:31.859576 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:32.135866 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:32.361863 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:32.635225 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:32.858930 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:33.136463 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:33.359334 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:33.636153 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:33.857932 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:34.135574 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:34.358201 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:34.636178 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:34.858805 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:35.135428 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:35.361888 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:35.635522 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:35.859383 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:36.136040 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:36.358807 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:36.634916 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:36.859182 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:37.140011 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:37.358436 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:37.635577 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:37.863004 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:38.136572 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:38.359921 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:38.642909 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:38.858179 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:39.135381 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:39.359347 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:39.636178 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:39.859058 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:40.136516 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:40.359114 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:40.636564 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:40.859155 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:41.136045 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:41.358362 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:41.636212 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:41.860027 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:42.136002 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:42.359646 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:42.635745 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:42.860160 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:43.135297 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:43.359049 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:43.636090 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:43.858520 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:44.135556 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:44.369400 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:44.674941 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:44.858672 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:45.142297 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:45.361624 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:45.640732 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:45.858911 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:46.135868 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:46.359240 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:46.638127 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:46.861627 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:47.135933 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:47.359358 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:47.636985 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:47.862297 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:48.136245 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:48.359640 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:48.640077 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:48.858847 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:49.136005 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:49.358923 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:49.637198 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:49.863001 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:50.136883 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:50.358541 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:50.636186 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:50.859506 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:51.144968 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:51.359491 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:51.635460 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:51.858821 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:52.135166 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:52.358686 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:52.635395 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:52.859123 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:53.138639 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:53.358167 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:53.635047 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:53.859978 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:54.137054 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:54.358869 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:54.635167 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:54.859062 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 22:31:55.135640 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:55.358372 1598521 kapi.go:107] duration metric: took 1m15.504744787s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 22:31:55.635393 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:56.135950 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:56.635937 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:57.134748 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:57.635156 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:58.135761 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:58.635586 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:59.135520 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:31:59.635479 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:00.161121 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:00.636016 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:01.135930 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:01.634773 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:02.136160 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:02.634645 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:03.140543 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:03.635940 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:04.135473 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:04.635072 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:05.134985 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:05.635093 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:06.135252 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:06.634778 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:07.135889 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:07.635598 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:08.136183 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:08.634798 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:09.135652 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:09.634860 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:10.135937 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:10.635914 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:11.135187 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:11.635345 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:12.136194 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:12.635635 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:13.135773 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:13.634829 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:14.135841 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:14.635682 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:15.137383 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:15.636764 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:16.136117 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:16.635201 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:17.136344 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:17.635680 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:18.135982 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:18.635871 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:19.135636 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:19.634996 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:20.135800 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:20.635961 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:21.136484 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:21.635183 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:22.135744 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:22.636011 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:23.134607 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:23.635295 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:24.135808 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:24.635141 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:25.135697 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:25.636118 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:26.135630 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:26.635919 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:27.136222 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:27.635669 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:28.136235 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:28.635371 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:29.135304 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:29.636121 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:30.137272 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:30.635793 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:31.136551 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:31.635450 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:32.135389 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:32.635884 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:33.135074 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:33.635388 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:34.135826 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:34.636030 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:35.136012 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:35.635173 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:36.135376 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:36.635100 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:37.136460 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:37.635737 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:38.135906 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:38.634753 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:39.134772 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:39.634606 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:40.135428 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:40.635405 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:41.135834 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:41.635896 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:42.136833 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:42.635872 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:43.135769 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:43.634537 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:44.136235 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:44.634850 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:45.145270 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:45.636921 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:46.138288 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:46.635806 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:47.134803 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:47.635149 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:48.135626 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:48.635830 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:49.135310 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:49.635931 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:50.135972 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:50.635344 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:51.135984 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:51.636578 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:52.137711 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:52.634699 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:53.136321 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:53.635607 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:54.140504 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:54.636022 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:55.138657 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:55.636049 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:56.135483 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:56.635570 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:57.135662 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:57.634949 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:58.135168 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:58.635459 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:59.140659 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:32:59.635943 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:33:00.137121 1598521 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 22:33:00.635244 1598521 kapi.go:107] duration metric: took 2m23.504439957s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 22:33:25.238615 1598521 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 22:33:25.238641 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:25.739066 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:26.238561 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:26.738190 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:27.239348 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:27.738968 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:28.239334 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:28.738262 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:29.239395 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:29.738846 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:30.239599 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:30.739056 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:31.239309 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:31.739558 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:32.238555 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:32.739814 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:33.238741 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:33.738971 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:34.238718 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:34.738829 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:35.238332 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:35.739193 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:36.239149 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:36.738770 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:37.238394 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:37.738913 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:38.239372 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:38.739385 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:39.239386 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:39.739769 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:40.244599 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:40.740478 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:41.239487 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:41.738810 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:42.238864 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:42.739213 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:43.253447 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:43.739478 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:44.241116 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:44.739558 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:45.243642 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:45.739157 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:46.239425 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:46.738236 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:47.238706 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:47.738666 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:48.238909 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:48.739096 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:49.238977 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:49.739278 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:50.238461 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:50.739935 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:51.238846 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:51.738075 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:52.238710 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:52.738558 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:53.238733 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:53.738520 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:54.239511 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:54.738477 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:55.238396 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:55.739000 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:56.239540 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:56.739412 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:57.239540 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:57.739181 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:58.238732 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:58.738973 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:59.238587 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:33:59.738251 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:34:00.261064 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:34:00.738779 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:34:01.239154 1598521 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 22:34:01.741316 1598521 kapi.go:107] duration metric: took 3m20.506244444s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 22:34:01.743239 1598521 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-509957 cluster.
	I0912 22:34:01.745117 1598521 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 22:34:01.747317 1598521 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 22:34:01.749755 1598521 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, default-storageclass, ingress-dns, metrics-server, volcano, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 22:34:01.751743 1598521 addons.go:510] duration metric: took 3m32.510575465s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner-rancher storage-provisioner default-storageclass ingress-dns metrics-server volcano inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 22:34:01.751804 1598521 start.go:246] waiting for cluster config update ...
	I0912 22:34:01.751825 1598521 start.go:255] writing updated cluster config ...
	I0912 22:34:01.752131 1598521 ssh_runner.go:195] Run: rm -f paused
	I0912 22:34:02.135879 1598521 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 22:34:02.137930 1598521 out.go:177] * Done! kubectl is now configured to use "addons-509957" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	804a46c3de8bb       4f725bf50aaa5       37 seconds ago      Exited              gadget                                   6                   4e0cd0ffb6386       gadget-57k9z
	6f38c1b662a5c       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   b0d44b40fb614       gcp-auth-89d5ffd79-gz52n
	da4ed80f760c2       289a818c8d9c5       4 minutes ago       Running             controller                               0                   b9a43117dc625       ingress-nginx-controller-bc57996ff-xsxjd
	7b07f0d1d6990       8b46b1cd48760       4 minutes ago       Running             admission                                0                   ee7831cf3d2fb       volcano-admission-77d7d48b68-pdtvn
	a16a29c6007f7       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        1                   de6778b57b8e3       volcano-scheduler-576bc46687-kl2wz
	7b1e67339233a       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	e215ffaea793d       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	b926a9af3a920       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	9794ec4729379       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	94a4547f0fdae       420193b27261a       5 minutes ago       Exited              patch                                    2                   3358fdbc992a3       ingress-nginx-admission-patch-r9v74
	29b4b9aabcb47       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	d1907f38c3553       420193b27261a       5 minutes ago       Exited              create                                   0                   4078166fd83d2       ingress-nginx-admission-create-4hzrl
	d9cffd8a9e77f       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   c216aade2e363       volcano-controllers-56675bb4d5-jsjzz
	7a78fc0d7428b       d9c7ad4c226bf       5 minutes ago       Exited              volcano-scheduler                        0                   de6778b57b8e3       volcano-scheduler-576bc46687-kl2wz
	5edb2005f897e       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   89bfc41608529       nvidia-device-plugin-daemonset-c7dzm
	92a1c009065e9       77bdba588b953       5 minutes ago       Running             yakd                                     0                   368142dd32acd       yakd-dashboard-67d98fc6b-qql9d
	91781bd5c4c89       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   378c5dd03e01a       csi-hostpathplugin-f2f6f
	dca61b23a2834       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   baf35ce914d48       csi-hostpath-attacher-0
	35128e83fd8a1       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   aef0b2dde997c       csi-hostpath-resizer-0
	4af0ede525892       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   73e5c29115c54       registry-proxy-q6ksf
	7369af5c6c13a       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   86452bab14e96       metrics-server-84c5f94fbc-g4znh
	beb97587e8ea0       2f6c962e7b831       6 minutes ago       Running             coredns                                  0                   312c463b84793       coredns-7c65d6cfc9-fk594
	cc5bd572b8751       8be4bcf8ec607       6 minutes ago       Running             cloud-spanner-emulator                   0                   766e143c5a030       cloud-spanner-emulator-769b77f747-rnhdg
	f2ec13a2f9053       4d1e5c3e97420       6 minutes ago       Running             volume-snapshot-controller               0                   de6030d988d93       snapshot-controller-56fcc65765-wsmd8
	725ec13e36a95       4d1e5c3e97420       6 minutes ago       Running             volume-snapshot-controller               0                   48b0e76f08c5e       snapshot-controller-56fcc65765-r68ft
	26f31a34ea22b       c9cf76bb104e1       6 minutes ago       Running             registry                                 0                   7fc07d423f4db       registry-66c9cd494c-qlq9q
	26a849e1eccc7       7ce2150c8929b       6 minutes ago       Running             local-path-provisioner                   0                   7de251d395723       local-path-provisioner-86d989889c-ddpmc
	4db359d175f52       35508c2f890c4       6 minutes ago       Running             minikube-ingress-dns                     0                   893f175f92e8c       kube-ingress-dns-minikube
	b4b6ae677f54c       ba04bb24b9575       6 minutes ago       Running             storage-provisioner                      0                   e88630488c1af       storage-provisioner
	09868105d8357       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   2f3b9576eb756       kindnet-glgtc
	6265c141f0351       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   8fb1ab53c0300       kube-proxy-cdr7c
	aaf81448b43a1       7f8aa378bb47d       7 minutes ago       Running             kube-scheduler                           0                   b878dfa5a965b       kube-scheduler-addons-509957
	e9cfe2e00eaea       279f381cb3736       7 minutes ago       Running             kube-controller-manager                  0                   13e12ec65b01c       kube-controller-manager-addons-509957
	48efaee760583       d3f53a98c0a9d       7 minutes ago       Running             kube-apiserver                           0                   95adddae30548       kube-apiserver-addons-509957
	8d2c5686390b5       27e3830e14027       7 minutes ago       Running             etcd                                     0                   8dca863766c73       etcd-addons-509957
	
	
	==> containerd <==
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.479606209Z" level=info msg="StopPodSandbox for \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\""
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.488803729Z" level=info msg="TearDown network for sandbox \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\" successfully"
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.488972269Z" level=info msg="StopPodSandbox for \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\" returns successfully"
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.489578889Z" level=info msg="RemovePodSandbox for \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\""
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.489700406Z" level=info msg="Forcibly stopping sandbox \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\""
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.505218534Z" level=info msg="TearDown network for sandbox \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\" successfully"
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.515494113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 12 22:34:24 addons-509957 containerd[817]: time="2024-09-12T22:34:24.515629177Z" level=info msg="RemovePodSandbox \"e34350d096453916c1e159727f28fbe624828de433a262726ff073be1a7d4c3d\" returns successfully"
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.349602295Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.495605473Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.497429146Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.501622960Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 151.967602ms"
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.501669737Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.504395265Z" level=info msg="CreateContainer within sandbox \"4e0cd0ffb63863224d5f801085671d7e684d4de170196c2a084ea8286df0625c\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.528163355Z" level=info msg="CreateContainer within sandbox \"4e0cd0ffb63863224d5f801085671d7e684d4de170196c2a084ea8286df0625c\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\""
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.534985130Z" level=info msg="StartContainer for \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\""
	Sep 12 22:36:43 addons-509957 containerd[817]: time="2024-09-12T22:36:43.599209633Z" level=info msg="StartContainer for \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\" returns successfully"
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.073857947Z" level=error msg="ExecSync for \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\" failed" error="failed to exec in container: failed to start exec \"6445965f4a014c097243f1dff20ac9b19c257628a477accbe21ce2bcb74b1f22\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.119509490Z" level=error msg="ExecSync for \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\" failed" error="failed to exec in container: failed to start exec \"3fee0910563b69afadada408a7b3cb7d57538a23ab28c8deda74bb120ccfb118\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.135642630Z" level=error msg="ExecSync for \"804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d\" failed" error="failed to exec in container: failed to start exec \"4369938f5a54287b6705a1f2ce244a58909b01d4205004f70d8c5fa5f36cafac\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.323854969Z" level=info msg="shim disconnected" id=804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d namespace=k8s.io
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.323920569Z" level=warning msg="cleaning up after shim disconnected" id=804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d namespace=k8s.io
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.323931695Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.856970118Z" level=info msg="RemoveContainer for \"ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef\""
	Sep 12 22:36:45 addons-509957 containerd[817]: time="2024-09-12T22:36:45.864388962Z" level=info msg="RemoveContainer for \"ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef\" returns successfully"
	
	
	==> coredns [beb97587e8ea0bacf6cc27e05e3be539ce3fedbcc85464b3fe7b194fa4e912e6] <==
	[INFO] 10.244.0.7:53779 - 41980 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070942s
	[INFO] 10.244.0.7:59840 - 59524 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00250719s
	[INFO] 10.244.0.7:59840 - 19592 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001995783s
	[INFO] 10.244.0.7:48012 - 13858 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090371s
	[INFO] 10.244.0.7:48012 - 54051 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000079105s
	[INFO] 10.244.0.7:41989 - 53804 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000104976s
	[INFO] 10.244.0.7:41989 - 6689 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040255s
	[INFO] 10.244.0.7:48219 - 33123 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058142s
	[INFO] 10.244.0.7:48219 - 21857 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00004457s
	[INFO] 10.244.0.7:53909 - 22993 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060152s
	[INFO] 10.244.0.7:53909 - 25811 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043733s
	[INFO] 10.244.0.7:53572 - 35695 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001486985s
	[INFO] 10.244.0.7:53572 - 54369 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001845351s
	[INFO] 10.244.0.7:42228 - 3285 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065427s
	[INFO] 10.244.0.7:42228 - 54231 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000072845s
	[INFO] 10.244.0.24:39900 - 16381 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016091s
	[INFO] 10.244.0.24:43238 - 50026 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118531s
	[INFO] 10.244.0.24:56651 - 58300 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123085s
	[INFO] 10.244.0.24:49180 - 23211 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082806s
	[INFO] 10.244.0.24:59967 - 49229 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009879s
	[INFO] 10.244.0.24:60482 - 20889 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073649s
	[INFO] 10.244.0.24:43436 - 21305 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002503711s
	[INFO] 10.244.0.24:42492 - 38722 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002394222s
	[INFO] 10.244.0.24:37233 - 45786 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002413267s
	[INFO] 10.244.0.24:58634 - 23131 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002467429s
	
	
	==> describe nodes <==
	Name:               addons-509957
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-509957
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-509957
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T22_30_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-509957
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-509957"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 22:30:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-509957
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 22:37:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 22:34:29 +0000   Thu, 12 Sep 2024 22:30:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 22:34:29 +0000   Thu, 12 Sep 2024 22:30:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 22:34:29 +0000   Thu, 12 Sep 2024 22:30:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 22:34:29 +0000   Thu, 12 Sep 2024 22:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-509957
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 097683f075c846ffa4843046f255fe83
	  System UUID:                ed35675f-d9d7-45e3-9821-f8199a5947b8
	  Boot ID:                    df7282e8-9021-4c1b-a6eb-f0483f23e85d
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-rnhdg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  gadget                      gadget-57k9z                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  gcp-auth                    gcp-auth-89d5ffd79-gz52n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xsxjd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m44s
	  kube-system                 coredns-7c65d6cfc9-fk594                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m51s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 csi-hostpathplugin-f2f6f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 etcd-addons-509957                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m57s
	  kube-system                 kindnet-glgtc                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m51s
	  kube-system                 kube-apiserver-addons-509957                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-controller-manager-addons-509957       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 kube-proxy-cdr7c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-scheduler-addons-509957                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 metrics-server-84c5f94fbc-g4znh             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m46s
	  kube-system                 nvidia-device-plugin-daemonset-c7dzm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 registry-66c9cd494c-qlq9q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 registry-proxy-q6ksf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 snapshot-controller-56fcc65765-r68ft        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 snapshot-controller-56fcc65765-wsmd8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  local-path-storage          local-path-provisioner-86d989889c-ddpmc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  volcano-system              volcano-admission-77d7d48b68-pdtvn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  volcano-system              volcano-controllers-56675bb4d5-jsjzz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  volcano-system              volcano-scheduler-576bc46687-kl2wz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-qql9d              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m50s  kube-proxy       
	  Normal   Starting                 6m56s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m56s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m56s  kubelet          Node addons-509957 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m56s  kubelet          Node addons-509957 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m56s  kubelet          Node addons-509957 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m52s  node-controller  Node addons-509957 event: Registered Node addons-509957 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [8d2c5686390b5d4bdb23b7ca12006643c57f00040463afa8b059bb42a6270353] <==
	{"level":"info","ts":"2024-09-12T22:30:18.202758Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-12T22:30:18.202961Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-12T22:30:18.202987Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-12T22:30:18.203069Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-12T22:30:18.203080Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-12T22:30:18.284659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-12T22:30:18.284703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-12T22:30:18.284729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-12T22:30:18.284746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T22:30:18.284753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T22:30:18.284763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-12T22:30:18.284771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T22:30:18.285592Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:30:18.286372Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-509957 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T22:30:18.286510Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:30:18.286807Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T22:30:18.286948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:30:18.287001Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:30:18.287019Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T22:30:18.287621Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:30:18.298607Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-12T22:30:18.301494Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T22:30:18.302118Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T22:30:18.302152Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T22:30:18.306801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [6f38c1b662a5c4bb277bbf1e8a8ebbdd51b5844e8b66e1c71dfa4883945169d7] <==
	2024/09/12 22:34:00 GCP Auth Webhook started!
	2024/09/12 22:34:18 Ready to marshal response ...
	2024/09/12 22:34:18 Ready to write response ...
	2024/09/12 22:34:18 Ready to marshal response ...
	2024/09/12 22:34:18 Ready to write response ...
	
	
	==> kernel <==
	 22:37:20 up  7:19,  0 users,  load average: 0.48, 1.13, 1.98
	Linux addons-509957 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [09868105d8357655cc42177ad5c56c278864f3c83e8e15ae773d8015b745c450] <==
	I0912 22:35:20.834689       1 main.go:299] handling current node
	I0912 22:35:30.828890       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:35:30.828926       1 main.go:299] handling current node
	I0912 22:35:40.835779       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:35:40.835812       1 main.go:299] handling current node
	I0912 22:35:50.828666       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:35:50.829085       1 main.go:299] handling current node
	I0912 22:36:00.828862       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:00.828900       1 main.go:299] handling current node
	I0912 22:36:10.831787       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:10.831849       1 main.go:299] handling current node
	I0912 22:36:20.834512       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:20.834552       1 main.go:299] handling current node
	I0912 22:36:30.828624       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:30.828658       1 main.go:299] handling current node
	I0912 22:36:40.834754       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:40.834971       1 main.go:299] handling current node
	I0912 22:36:50.830050       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:36:50.830092       1 main.go:299] handling current node
	I0912 22:37:00.836947       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:37:00.836983       1 main.go:299] handling current node
	I0912 22:37:10.828620       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:37:10.828769       1 main.go:299] handling current node
	I0912 22:37:20.836303       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0912 22:37:20.836337       1 main.go:299] handling current node
	
	
	==> kube-apiserver [48efaee76058340b578b7669603200bd53151213f081538dad01a958e0e57be4] <==
	W0912 22:32:44.262710       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.132.78:443: connect: connection refused
	E0912 22:32:44.262747       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.132.78:443: connect: connection refused" logger="UnhandledError"
	W0912 22:32:44.264396       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:44.715216       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:45.521967       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:46.609428       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:47.687791       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:48.717185       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:49.802687       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:50.902333       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:51.999026       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:53.021631       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:54.045793       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:55.122394       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:56.129884       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:57.170484       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:32:58.253124       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.3.34:443: connect: connection refused
	W0912 22:33:25.095661       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.132.78:443: connect: connection refused
	E0912 22:33:25.095744       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.132.78:443: connect: connection refused" logger="UnhandledError"
	W0912 22:33:44.239216       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.132.78:443: connect: connection refused
	E0912 22:33:44.239257       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.132.78:443: connect: connection refused" logger="UnhandledError"
	W0912 22:33:44.270315       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.132.78:443: connect: connection refused
	E0912 22:33:44.270360       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.132.78:443: connect: connection refused" logger="UnhandledError"
	I0912 22:34:18.639899       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0912 22:34:18.672432       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [e9cfe2e00eaeabc3adef263a34551be291c912ca2e380de63a73a53a8e915fd0] <==
	I0912 22:33:44.283626       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:44.286715       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:44.297063       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:44.304264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:44.318974       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:45.341451       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:45.360582       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:46.342860       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:46.436735       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:47.344373       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:47.436295       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:47.445447       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:47.455582       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:47.461682       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0912 22:33:48.350051       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:48.360903       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:33:48.368217       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0912 22:34:01.458740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.151293ms"
	I0912 22:34:01.458989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="210.674µs"
	I0912 22:34:17.023906       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0912 22:34:17.058158       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0912 22:34:18.021991       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0912 22:34:18.051950       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0912 22:34:18.388338       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0912 22:34:29.104719       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-509957"
	
	
	==> kube-proxy [6265c141f0351fcf82b0087ec7278095c51ee0839b444a61ebcb42a1999ef48f] <==
	I0912 22:30:30.449982       1 server_linux.go:66] "Using iptables proxy"
	I0912 22:30:30.554804       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0912 22:30:30.554869       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 22:30:30.602413       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 22:30:30.602467       1 server_linux.go:169] "Using iptables Proxier"
	I0912 22:30:30.605347       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 22:30:30.605814       1 server.go:483] "Version info" version="v1.31.1"
	I0912 22:30:30.605827       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:30:30.614476       1 config.go:328] "Starting node config controller"
	I0912 22:30:30.614503       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 22:30:30.616573       1 config.go:199] "Starting service config controller"
	I0912 22:30:30.616590       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 22:30:30.616634       1 config.go:105] "Starting endpoint slice config controller"
	I0912 22:30:30.616640       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 22:30:30.715071       1 shared_informer.go:320] Caches are synced for node config
	I0912 22:30:30.719454       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 22:30:30.719514       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [aaf81448b43a165b1cd02ac1ce9ac07f7e7e202ab4c4c9973e09a1f7cefadf34] <==
	W0912 22:30:21.741211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 22:30:21.741397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:21.741723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 22:30:21.741859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:21.742704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 22:30:21.742796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:21.742883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:30:21.743039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:21.742901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:30:21.747661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.580003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:30:22.580050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.622716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:30:22.622894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.643662       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 22:30:22.643825       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 22:30:22.801218       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 22:30:22.801260       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.813418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:30:22.813465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.832645       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 22:30:22.832685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 22:30:22.856861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 22:30:22.857620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0912 22:30:24.535879       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 22:35:49 addons-509957 kubelet[1495]: I0912 22:35:49.347079    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:35:49 addons-509957 kubelet[1495]: E0912 22:35:49.347799    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:36:01 addons-509957 kubelet[1495]: I0912 22:36:01.347482    1495 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-qlq9q" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 22:36:01 addons-509957 kubelet[1495]: I0912 22:36:01.347534    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:36:01 addons-509957 kubelet[1495]: E0912 22:36:01.348437    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:36:15 addons-509957 kubelet[1495]: I0912 22:36:15.347087    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:36:15 addons-509957 kubelet[1495]: E0912 22:36:15.347322    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:36:30 addons-509957 kubelet[1495]: I0912 22:36:30.347315    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:36:30 addons-509957 kubelet[1495]: E0912 22:36:30.347527    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:36:39 addons-509957 kubelet[1495]: I0912 22:36:39.348036    1495 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-c7dzm" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 22:36:43 addons-509957 kubelet[1495]: I0912 22:36:43.347628    1495 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-q6ksf" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 22:36:43 addons-509957 kubelet[1495]: I0912 22:36:43.347862    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:36:45 addons-509957 kubelet[1495]: E0912 22:36:45.075171    1495 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"6445965f4a014c097243f1dff20ac9b19c257628a477accbe21ce2bcb74b1f22\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:36:45 addons-509957 kubelet[1495]: E0912 22:36:45.120344    1495 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"3fee0910563b69afadada408a7b3cb7d57538a23ab28c8deda74bb120ccfb118\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:36:45 addons-509957 kubelet[1495]: E0912 22:36:45.154271    1495 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"4369938f5a54287b6705a1f2ce244a58909b01d4205004f70d8c5fa5f36cafac\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 12 22:36:45 addons-509957 kubelet[1495]: I0912 22:36:45.853349    1495 scope.go:117] "RemoveContainer" containerID="ef9c05dea01b8d59088aacf2895b11e7f79373d0e6aa7e6476152808ceef41ef"
	Sep 12 22:36:45 addons-509957 kubelet[1495]: I0912 22:36:45.853762    1495 scope.go:117] "RemoveContainer" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d"
	Sep 12 22:36:45 addons-509957 kubelet[1495]: E0912 22:36:45.853929    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:36:47 addons-509957 kubelet[1495]: I0912 22:36:47.789281    1495 scope.go:117] "RemoveContainer" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d"
	Sep 12 22:36:47 addons-509957 kubelet[1495]: E0912 22:36:47.789489    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:37:00 addons-509957 kubelet[1495]: I0912 22:37:00.347694    1495 scope.go:117] "RemoveContainer" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d"
	Sep 12 22:37:00 addons-509957 kubelet[1495]: E0912 22:37:00.348579    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	Sep 12 22:37:06 addons-509957 kubelet[1495]: I0912 22:37:06.347409    1495 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-qlq9q" secret="" err="secret \"gcp-auth\" not found"
	Sep 12 22:37:13 addons-509957 kubelet[1495]: I0912 22:37:13.347869    1495 scope.go:117] "RemoveContainer" containerID="804a46c3de8bb4f31d4bacaca4802b0cfbd1fb620d604694e17507d82ac5998d"
	Sep 12 22:37:13 addons-509957 kubelet[1495]: E0912 22:37:13.348092    1495 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-57k9z_gadget(431a5873-4f16-4d23-a2a8-0c987ea08142)\"" pod="gadget/gadget-57k9z" podUID="431a5873-4f16-4d23-a2a8-0c987ea08142"
	
	
	==> storage-provisioner [b4b6ae677f54c214e1aa702323da6c036d36812caa7584c126d929055e42e2db] <==
	I0912 22:30:35.214708       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 22:30:35.228887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 22:30:35.228939       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 22:30:35.248351       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 22:30:35.248533       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-509957_a5cd57b8-d286-4c4b-af20-532f2e72ac4a!
	I0912 22:30:35.249591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e706af69-144e-47b5-85b8-96c57c6fa0f7", APIVersion:"v1", ResourceVersion:"559", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-509957_a5cd57b8-d286-4c4b-af20-532f2e72ac4a became leader
	I0912 22:30:35.350732       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-509957_a5cd57b8-d286-4c4b-af20-532f2e72ac4a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-509957 -n addons-509957
helpers_test.go:261: (dbg) Run:  kubectl --context addons-509957 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-4hzrl ingress-nginx-admission-patch-r9v74 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-509957 describe pod ingress-nginx-admission-create-4hzrl ingress-nginx-admission-patch-r9v74 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-509957 describe pod ingress-nginx-admission-create-4hzrl ingress-nginx-admission-patch-r9v74 test-job-nginx-0: exit status 1 (89.880396ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4hzrl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-r9v74" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-509957 describe pod ingress-nginx-admission-create-4hzrl ingress-nginx-admission-patch-r9v74 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (381.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-011723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0912 23:21:58.517352 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-011723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.220230253s)

                                                
                                                
-- stdout --
	* [old-k8s-version-011723] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-011723" primary control-plane node in "old-k8s-version-011723" cluster
	* Pulling base image v0.0.45-1726156396-19616 ...
	* Restarting existing docker container for "old-k8s-version-011723" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-011723 addons enable metrics-server
	
	* Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 23:21:38.890947 1805825 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:21:38.891126 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:21:38.891134 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:21:38.891139 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:21:38.891418 1805825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 23:21:38.891863 1805825 out.go:352] Setting JSON to false
	I0912 23:21:38.892855 1805825 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29026,"bootTime":1726154273,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 23:21:38.892986 1805825 start.go:139] virtualization:  
	I0912 23:21:38.895649 1805825 out.go:177] * [old-k8s-version-011723] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 23:21:38.898176 1805825 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:21:38.898235 1805825 notify.go:220] Checking for updates...
	I0912 23:21:38.901980 1805825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:21:38.903988 1805825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:21:38.905622 1805825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 23:21:38.907240 1805825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 23:21:38.909325 1805825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:21:38.911422 1805825 config.go:182] Loaded profile config "old-k8s-version-011723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0912 23:21:38.914169 1805825 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0912 23:21:38.915960 1805825 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:21:38.949352 1805825 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 23:21:38.949484 1805825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:21:39.037383 1805825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-12 23:21:39.026191066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:21:39.037501 1805825 docker.go:318] overlay module found
	I0912 23:21:39.039832 1805825 out.go:177] * Using the docker driver based on existing profile
	I0912 23:21:39.041676 1805825 start.go:297] selected driver: docker
	I0912 23:21:39.041691 1805825 start.go:901] validating driver "docker" against &{Name:old-k8s-version-011723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:21:39.041805 1805825 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:21:39.042474 1805825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:21:39.128829 1805825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-09-12 23:21:39.118558996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:21:39.129185 1805825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:21:39.129212 1805825 cni.go:84] Creating CNI manager for ""
	I0912 23:21:39.129220 1805825 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 23:21:39.129259 1805825 start.go:340] cluster config:
	{Name:old-k8s-version-011723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:21:39.131676 1805825 out.go:177] * Starting "old-k8s-version-011723" primary control-plane node in "old-k8s-version-011723" cluster
	I0912 23:21:39.133844 1805825 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0912 23:21:39.136055 1805825 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 23:21:39.138050 1805825 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 23:21:39.138136 1805825 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0912 23:21:39.138150 1805825 cache.go:56] Caching tarball of preloaded images
	I0912 23:21:39.138246 1805825 preload.go:172] Found /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 23:21:39.138268 1805825 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0912 23:21:39.138412 1805825 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/config.json ...
	I0912 23:21:39.138646 1805825 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	W0912 23:21:39.166036 1805825 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 is of wrong architecture
	I0912 23:21:39.166064 1805825 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 23:21:39.166183 1805825 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 23:21:39.166219 1805825 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 23:21:39.166229 1805825 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 23:21:39.166238 1805825 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 23:21:39.166243 1805825 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 23:21:39.306440 1805825 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 23:21:39.306477 1805825 cache.go:194] Successfully downloaded all kic artifacts
	I0912 23:21:39.306513 1805825 start.go:360] acquireMachinesLock for old-k8s-version-011723: {Name:mke0556fcf2dfec23d3db329fbb0d4d87739d4fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:21:39.306581 1805825 start.go:364] duration metric: took 42.215µs to acquireMachinesLock for "old-k8s-version-011723"
	I0912 23:21:39.306607 1805825 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:21:39.306616 1805825 fix.go:54] fixHost starting: 
	I0912 23:21:39.306895 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:39.367632 1805825 fix.go:112] recreateIfNeeded on old-k8s-version-011723: state=Stopped err=<nil>
	W0912 23:21:39.367659 1805825 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:21:39.370020 1805825 out.go:177] * Restarting existing docker container for "old-k8s-version-011723" ...
	I0912 23:21:39.371586 1805825 cli_runner.go:164] Run: docker start old-k8s-version-011723
	I0912 23:21:39.745707 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:39.769887 1805825 kic.go:430] container "old-k8s-version-011723" state is running.
	I0912 23:21:39.770309 1805825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-011723
	I0912 23:21:39.806794 1805825 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/config.json ...
	I0912 23:21:39.807023 1805825 machine.go:93] provisionDockerMachine start ...
	I0912 23:21:39.807099 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:39.837003 1805825 main.go:141] libmachine: Using SSH client type: native
	I0912 23:21:39.837267 1805825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34934 <nil> <nil>}
	I0912 23:21:39.837275 1805825 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:21:39.838942 1805825 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0912 23:21:42.987716 1805825 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-011723
	
	I0912 23:21:42.987742 1805825 ubuntu.go:169] provisioning hostname "old-k8s-version-011723"
	I0912 23:21:42.987817 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:43.009406 1805825 main.go:141] libmachine: Using SSH client type: native
	I0912 23:21:43.009686 1805825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34934 <nil> <nil>}
	I0912 23:21:43.009705 1805825 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-011723 && echo "old-k8s-version-011723" | sudo tee /etc/hostname
	I0912 23:21:43.164820 1805825 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-011723
	
	I0912 23:21:43.164927 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:43.184487 1805825 main.go:141] libmachine: Using SSH client type: native
	I0912 23:21:43.184737 1805825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34934 <nil> <nil>}
	I0912 23:21:43.184761 1805825 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-011723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-011723/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-011723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:21:43.327875 1805825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:21:43.327903 1805825 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-1592376/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-1592376/.minikube}
	I0912 23:21:43.327982 1805825 ubuntu.go:177] setting up certificates
	I0912 23:21:43.327992 1805825 provision.go:84] configureAuth start
	I0912 23:21:43.328082 1805825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-011723
	I0912 23:21:43.351828 1805825 provision.go:143] copyHostCerts
	I0912 23:21:43.351897 1805825 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem, removing ...
	I0912 23:21:43.351908 1805825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem
	I0912 23:21:43.351993 1805825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem (1082 bytes)
	I0912 23:21:43.352089 1805825 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem, removing ...
	I0912 23:21:43.352094 1805825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem
	I0912 23:21:43.352120 1805825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem (1123 bytes)
	I0912 23:21:43.352170 1805825 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem, removing ...
	I0912 23:21:43.352175 1805825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem
	I0912 23:21:43.352197 1805825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem (1675 bytes)
	I0912 23:21:43.352242 1805825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-011723 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-011723]
	I0912 23:21:43.550007 1805825 provision.go:177] copyRemoteCerts
	I0912 23:21:43.550113 1805825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:21:43.550202 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:43.573051 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:43.674416 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:21:43.699613 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0912 23:21:43.724271 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:21:43.748504 1805825 provision.go:87] duration metric: took 420.491239ms to configureAuth
	I0912 23:21:43.748535 1805825 ubuntu.go:193] setting minikube options for container-runtime
	I0912 23:21:43.748770 1805825 config.go:182] Loaded profile config "old-k8s-version-011723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0912 23:21:43.748785 1805825 machine.go:96] duration metric: took 3.941752167s to provisionDockerMachine
	I0912 23:21:43.748794 1805825 start.go:293] postStartSetup for "old-k8s-version-011723" (driver="docker")
	I0912 23:21:43.748805 1805825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:21:43.748861 1805825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:21:43.748907 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:43.765295 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:43.864941 1805825 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:21:43.868996 1805825 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 23:21:43.869030 1805825 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 23:21:43.869040 1805825 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 23:21:43.869047 1805825 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 23:21:43.869058 1805825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/addons for local assets ...
	I0912 23:21:43.869115 1805825 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/files for local assets ...
	I0912 23:21:43.869198 1805825 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem -> 15977602.pem in /etc/ssl/certs
	I0912 23:21:43.869320 1805825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:21:43.882279 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem --> /etc/ssl/certs/15977602.pem (1708 bytes)
	I0912 23:21:43.924880 1805825 start.go:296] duration metric: took 176.069334ms for postStartSetup
	I0912 23:21:43.924970 1805825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 23:21:43.925015 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:43.947161 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:44.049591 1805825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 23:21:44.054560 1805825 fix.go:56] duration metric: took 4.747937312s for fixHost
	I0912 23:21:44.054628 1805825 start.go:83] releasing machines lock for "old-k8s-version-011723", held for 4.748033181s
	I0912 23:21:44.054744 1805825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-011723
	I0912 23:21:44.083847 1805825 ssh_runner.go:195] Run: cat /version.json
	I0912 23:21:44.083907 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:44.084214 1805825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:21:44.084269 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:44.105067 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:44.112583 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:44.211310 1805825 ssh_runner.go:195] Run: systemctl --version
	I0912 23:21:44.372114 1805825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 23:21:44.377218 1805825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 23:21:44.402615 1805825 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 23:21:44.402729 1805825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:21:44.420423 1805825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 23:21:44.420457 1805825 start.go:495] detecting cgroup driver to use...
	I0912 23:21:44.420489 1805825 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 23:21:44.420573 1805825 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 23:21:44.444347 1805825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 23:21:44.468484 1805825 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:21:44.468606 1805825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:21:44.490027 1805825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:21:44.506251 1805825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:21:44.670128 1805825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:21:44.811005 1805825 docker.go:233] disabling docker service ...
	I0912 23:21:44.811153 1805825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:21:44.840274 1805825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:21:44.856871 1805825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:21:45.012001 1805825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:21:45.167076 1805825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:21:45.192342 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:21:45.237253 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0912 23:21:45.254953 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 23:21:45.268702 1805825 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 23:21:45.268915 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 23:21:45.291125 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 23:21:45.315046 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 23:21:45.330923 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 23:21:45.351175 1805825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:21:45.368195 1805825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 23:21:45.382082 1805825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:21:45.398388 1805825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:21:45.409137 1805825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:21:45.556627 1805825 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 23:21:45.822717 1805825 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0912 23:21:45.822862 1805825 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 23:21:45.826897 1805825 start.go:563] Will wait 60s for crictl version
	I0912 23:21:45.826965 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:21:45.840647 1805825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:21:45.918057 1805825 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
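The /etc/crictl.yaml written a few lines above points crictl at containerd's socket, which is what lets the sudo /usr/bin/crictl version call here succeed against containerd 1.7.22. As a minimal sketch (assuming shell access on the node; the endpoint value is the one the log writes), the same wiring can be checked by hand:

	# Show the generated config; it should contain the containerd socket endpoint.
	cat /etc/crictl.yaml
	# Equivalent explicit form that bypasses the config file entirely.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version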
	I0912 23:21:45.918215 1805825 ssh_runner.go:195] Run: containerd --version
	I0912 23:21:45.959798 1805825 ssh_runner.go:195] Run: containerd --version
	I0912 23:21:45.998942 1805825 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0912 23:21:46.001042 1805825 cli_runner.go:164] Run: docker network inspect old-k8s-version-011723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 23:21:46.032040 1805825 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0912 23:21:46.038398 1805825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:21:46.055429 1805825 kubeadm.go:883] updating cluster {Name:old-k8s-version-011723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:21:46.055557 1805825 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 23:21:46.055633 1805825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:21:46.130609 1805825 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 23:21:46.130638 1805825 containerd.go:534] Images already preloaded, skipping extraction
	I0912 23:21:46.130701 1805825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:21:46.195878 1805825 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 23:21:46.195901 1805825 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:21:46.195910 1805825 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0912 23:21:46.196036 1805825 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-011723 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
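The fragment above is the kubelet systemd drop-in that minikube renders for Kubernetes v1.20.0; the scp calls a few lines below place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. A minimal sketch for inspecting the effective unit on the node (assuming shell access) is:

	# Print the merged kubelet unit, including the 10-kubeadm.conf drop-in.
	systemctl cat kubelet
	# Confirm the drop-in carries the containerd endpoint and node IP shown in the log.
	grep -E 'container-runtime-endpoint|node-ip' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf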
	I0912 23:21:46.196106 1805825 ssh_runner.go:195] Run: sudo crictl info
	I0912 23:21:46.269895 1805825 cni.go:84] Creating CNI manager for ""
	I0912 23:21:46.269961 1805825 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 23:21:46.269984 1805825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:21:46.270017 1805825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-011723 NodeName:old-k8s-version-011723 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 23:21:46.270226 1805825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-011723"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:21:46.270315 1805825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
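Note that the containerd edits earlier in the log (SystemdCgroup = false in /etc/containerd/config.toml) and the KubeletConfiguration above (cgroupDriver: cgroupfs) both select the cgroupfs driver, matching the "cgroupfs" driver detected on the host. A minimal sketch for confirming the two sides still agree on a running node (assuming shell access; kubeadm writes the kubelet settings to /var/lib/kubelet/config.yaml) is:

	# containerd side: expect "SystemdCgroup = false" for the runc runtime.
	grep SystemdCgroup /etc/containerd/config.toml
	# kubelet side: expect "cgroupDriver: cgroupfs" once kubeadm has written its config.
	grep cgroupDriver /var/lib/kubelet/config.yaml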
	I0912 23:21:46.280973 1805825 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:21:46.281094 1805825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:21:46.304231 1805825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0912 23:21:46.331249 1805825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:21:46.365447 1805825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0912 23:21:46.392960 1805825 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0912 23:21:46.397948 1805825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:21:46.411335 1805825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:21:46.568064 1805825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:21:46.595784 1805825 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723 for IP: 192.168.76.2
	I0912 23:21:46.595855 1805825 certs.go:194] generating shared ca certs ...
	I0912 23:21:46.595886 1805825 certs.go:226] acquiring lock for ca certs: {Name:mk5b7cca91a053f0ec1ca9c487c600f7eefaa6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:21:46.596085 1805825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key
	I0912 23:21:46.596156 1805825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key
	I0912 23:21:46.596191 1805825 certs.go:256] generating profile certs ...
	I0912 23:21:46.596341 1805825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.key
	I0912 23:21:46.596445 1805825 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/apiserver.key.b23fdb97
	I0912 23:21:46.596509 1805825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/proxy-client.key
	I0912 23:21:46.596656 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760.pem (1338 bytes)
	W0912 23:21:46.596719 1805825 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760_empty.pem, impossibly tiny 0 bytes
	I0912 23:21:46.596744 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:21:46.596798 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:21:46.596847 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:21:46.596900 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem (1675 bytes)
	I0912 23:21:46.596980 1805825 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem (1708 bytes)
	I0912 23:21:46.597717 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:21:46.697569 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:21:46.774306 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:21:46.838650 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:21:46.887456 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0912 23:21:46.916077 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 23:21:46.953731 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:21:46.996489 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 23:21:47.049121 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem --> /usr/share/ca-certificates/15977602.pem (1708 bytes)
	I0912 23:21:47.089641 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:21:47.132266 1805825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760.pem --> /usr/share/ca-certificates/1597760.pem (1338 bytes)
	I0912 23:21:47.171808 1805825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:21:47.197898 1805825 ssh_runner.go:195] Run: openssl version
	I0912 23:21:47.205253 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:21:47.217755 1805825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:21:47.222052 1805825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 22:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:21:47.222172 1805825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:21:47.240252 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:21:47.266541 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597760.pem && ln -fs /usr/share/ca-certificates/1597760.pem /etc/ssl/certs/1597760.pem"
	I0912 23:21:47.280316 1805825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597760.pem
	I0912 23:21:47.284193 1805825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 22:41 /usr/share/ca-certificates/1597760.pem
	I0912 23:21:47.284296 1805825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597760.pem
	I0912 23:21:47.293344 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597760.pem /etc/ssl/certs/51391683.0"
	I0912 23:21:47.303956 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15977602.pem && ln -fs /usr/share/ca-certificates/15977602.pem /etc/ssl/certs/15977602.pem"
	I0912 23:21:47.313453 1805825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15977602.pem
	I0912 23:21:47.317250 1805825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 22:41 /usr/share/ca-certificates/15977602.pem
	I0912 23:21:47.317336 1805825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15977602.pem
	I0912 23:21:47.324530 1805825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15977602.pem /etc/ssl/certs/3ec20f2e.0"
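The openssl/ln pairs above follow OpenSSL's hashed-directory convention: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash plus a .0 suffix (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look certificates up by subject hash. A minimal sketch of the same step for a single certificate (assuming shell access; the path is one already used in the log) is:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	# Subject hash that OpenSSL expects in the symlink name, e.g. b5213941.
	hash=$(openssl x509 -hash -noout -in "$cert")
	# Link the certificate under <hash>.0 so it is discoverable by subject hash.
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"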
	I0912 23:21:47.337564 1805825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:21:47.341689 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:21:47.348929 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:21:47.360059 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:21:47.372499 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:21:47.380447 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:21:47.392556 1805825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
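Each openssl x509 -noout -in <cert> -checkend 86400 invocation above exits non-zero if the named certificate expires within the next 86400 seconds (24 hours); the restart path apparently uses this to decide whether the existing control-plane certificates can be kept. A minimal sketch of the same check with an explicit message (assuming shell access; the paths are ones checked in the log) is:

	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  # -checkend N returns success only if the cert is still valid N seconds from now.
	  if openssl x509 -noout -in "$crt" -checkend 86400; then
	    echo "$crt: valid for at least another 24h"
	  else
	    echo "$crt: expires within 24h (or could not be read)"
	  fi
	done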
	I0912 23:21:47.403689 1805825 kubeadm.go:392] StartCluster: {Name:old-k8s-version-011723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-011723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:21:47.403812 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0912 23:21:47.403901 1805825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:21:47.465033 1805825 cri.go:89] found id: "91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:21:47.465057 1805825 cri.go:89] found id: "c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:21:47.465063 1805825 cri.go:89] found id: "2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:21:47.465075 1805825 cri.go:89] found id: "f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:21:47.465078 1805825 cri.go:89] found id: "a39a9cadf13eda3be1ada9a90d5ba41f45987002f8b6bc8a8cb57f06dd3c71c5"
	I0912 23:21:47.465082 1805825 cri.go:89] found id: "5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:21:47.465085 1805825 cri.go:89] found id: "ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:21:47.465088 1805825 cri.go:89] found id: "e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:21:47.465091 1805825 cri.go:89] found id: "f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:21:47.465099 1805825 cri.go:89] found id: ""
	I0912 23:21:47.465176 1805825 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0912 23:21:47.485486 1805825 cri.go:116] JSON = null
	W0912 23:21:47.485549 1805825 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
	I0912 23:21:47.485620 1805825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:21:47.494932 1805825 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:21:47.494953 1805825 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:21:47.495014 1805825 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:21:47.503587 1805825 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:21:47.504101 1805825 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-011723" does not appear in /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:21:47.504247 1805825 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-1592376/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-011723" cluster setting kubeconfig missing "old-k8s-version-011723" context setting]
	I0912 23:21:47.504607 1805825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/kubeconfig: {Name:mk20814b10c438de6fa8214738e210df331cf1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:21:47.506168 1805825 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:21:47.515196 1805825 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0912 23:21:47.515240 1805825 kubeadm.go:597] duration metric: took 20.27097ms to restartPrimaryControlPlane
	I0912 23:21:47.515250 1805825 kubeadm.go:394] duration metric: took 111.57583ms to StartCluster
	I0912 23:21:47.515268 1805825 settings.go:142] acquiring lock: {Name:mk1fdbbc4ffc0e3fc6419399beeda4839e1c5a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:21:47.515339 1805825 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:21:47.516043 1805825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/kubeconfig: {Name:mk20814b10c438de6fa8214738e210df331cf1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:21:47.516275 1805825 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 23:21:47.516618 1805825 config.go:182] Loaded profile config "old-k8s-version-011723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0912 23:21:47.516668 1805825 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:21:47.516765 1805825 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-011723"
	I0912 23:21:47.516786 1805825 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-011723"
	I0912 23:21:47.516794 1805825 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-011723"
	W0912 23:21:47.516801 1805825 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:21:47.516814 1805825 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-011723"
	I0912 23:21:47.516822 1805825 host.go:66] Checking if "old-k8s-version-011723" exists ...
	I0912 23:21:47.517112 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:47.517595 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:47.517967 1805825 addons.go:69] Setting dashboard=true in profile "old-k8s-version-011723"
	I0912 23:21:47.518009 1805825 addons.go:234] Setting addon dashboard=true in "old-k8s-version-011723"
	W0912 23:21:47.518027 1805825 addons.go:243] addon dashboard should already be in state true
	I0912 23:21:47.518053 1805825 host.go:66] Checking if "old-k8s-version-011723" exists ...
	I0912 23:21:47.518458 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:47.520181 1805825 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-011723"
	I0912 23:21:47.520212 1805825 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-011723"
	W0912 23:21:47.520219 1805825 addons.go:243] addon metrics-server should already be in state true
	I0912 23:21:47.520245 1805825 host.go:66] Checking if "old-k8s-version-011723" exists ...
	I0912 23:21:47.520698 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:47.521225 1805825 out.go:177] * Verifying Kubernetes components...
	I0912 23:21:47.522895 1805825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:21:47.577862 1805825 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:21:47.579858 1805825 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:21:47.579893 1805825 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:21:47.579978 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:47.593553 1805825 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0912 23:21:47.594359 1805825 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-011723"
	W0912 23:21:47.594379 1805825 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:21:47.594403 1805825 host.go:66] Checking if "old-k8s-version-011723" exists ...
	I0912 23:21:47.594826 1805825 cli_runner.go:164] Run: docker container inspect old-k8s-version-011723 --format={{.State.Status}}
	I0912 23:21:47.599792 1805825 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0912 23:21:47.602468 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0912 23:21:47.602505 1805825 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0912 23:21:47.602577 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:47.602710 1805825 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:21:47.607879 1805825 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:47.607902 1805825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:21:47.607970 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:47.646557 1805825 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:21:47.646580 1805825 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:21:47.646643 1805825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-011723
	I0912 23:21:47.652027 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:47.674870 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:47.694842 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:47.704310 1805825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34934 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/old-k8s-version-011723/id_rsa Username:docker}
	I0912 23:21:47.751852 1805825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:21:47.817615 1805825 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-011723" to be "Ready" ...
	I0912 23:21:47.889174 1805825 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:21:47.889208 1805825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:21:47.922659 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0912 23:21:47.922690 1805825 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0912 23:21:47.954852 1805825 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:21:47.954881 1805825 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 23:21:47.968520 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:21:48.001928 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:48.006294 1805825 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:48.006337 1805825 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:21:48.026291 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0912 23:21:48.026332 1805825 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0912 23:21:48.110899 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:48.118774 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0912 23:21:48.118803 1805825 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0912 23:21:48.241972 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0912 23:21:48.241996 1805825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0912 23:21:48.380708 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.380760 1805825 retry.go:31] will retry after 367.994911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:48.381681 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.381704 1805825 retry.go:31] will retry after 324.286224ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.401643 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0912 23:21:48.401689 1805825 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0912 23:21:48.453598 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0912 23:21:48.453642 1805825 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0912 23:21:48.458673 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.458714 1805825 retry.go:31] will retry after 317.821356ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.476660 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0912 23:21:48.476703 1805825 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0912 23:21:48.518324 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0912 23:21:48.518371 1805825 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0912 23:21:48.544612 1805825 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0912 23:21:48.544639 1805825 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0912 23:21:48.564754 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:48.668219 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.668253 1805825 retry.go:31] will retry after 283.447867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.706429 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:48.749818 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:21:48.777179 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:48.952384 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:48.973383 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:48.973418 1805825 retry.go:31] will retry after 264.60019ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:49.045737 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:49.045789 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.045805 1805825 retry.go:31] will retry after 555.894279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.045825 1805825 retry.go:31] will retry after 220.174896ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:49.148450 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.148486 1805825 retry.go:31] will retry after 553.997483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.238780 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:49.267210 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0912 23:21:49.380232 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.380277 1805825 retry.go:31] will retry after 465.301688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:49.425270 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.425313 1805825 retry.go:31] will retry after 806.593407ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.602706 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:49.703162 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:49.747906 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.747952 1805825 retry.go:31] will retry after 727.26024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.818684 1805825 node_ready.go:53] error getting node "old-k8s-version-011723": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-011723": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 23:21:49.845873 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0912 23:21:49.884101 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.884150 1805825 retry.go:31] will retry after 686.280064ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:49.998378 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:49.998422 1805825 retry.go:31] will retry after 473.002247ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.232925 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0912 23:21:50.331938 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.331972 1805825 retry.go:31] will retry after 1.123653662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.472370 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:50.475653 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:50.570810 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:50.603305 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.603346 1805825 retry.go:31] will retry after 1.465769992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:50.649493 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.649528 1805825 retry.go:31] will retry after 763.852406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:50.684655 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:50.684689 1805825 retry.go:31] will retry after 647.993812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:51.333365 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0912 23:21:51.413750 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:51.456005 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0912 23:21:51.473322 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:51.473402 1805825 retry.go:31] will retry after 736.248769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:51.554377 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:51.554462 1805825 retry.go:31] will retry after 1.446233953s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0912 23:21:51.584461 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:51.584494 1805825 retry.go:31] will retry after 1.637572967s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:52.069675 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0912 23:21:52.168739 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:52.168772 1805825 retry.go:31] will retry after 1.440286436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:52.209994 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:52.288169 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:52.288241 1805825 retry.go:31] will retry after 2.44389615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:52.318744 1805825 node_ready.go:53] error getting node "old-k8s-version-011723": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-011723": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 23:21:53.001904 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0912 23:21:53.090150 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:53.090190 1805825 retry.go:31] will retry after 979.283008ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:53.222271 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0912 23:21:53.313223 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:53.313263 1805825 retry.go:31] will retry after 1.959626442s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:53.610137 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0912 23:21:53.757077 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:53.757159 1805825 retry.go:31] will retry after 2.839734726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:54.070581 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0912 23:21:54.163478 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:54.163590 1805825 retry.go:31] will retry after 4.15322071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:54.319188 1805825 node_ready.go:53] error getting node "old-k8s-version-011723": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-011723": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 23:21:54.732388 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0912 23:21:54.805251 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:54.805309 1805825 retry.go:31] will retry after 2.322702818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:55.273268 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0912 23:21:55.352955 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:55.352988 1805825 retry.go:31] will retry after 3.819769319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0912 23:21:56.597696 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:21:57.128607 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0912 23:21:58.317001 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:21:59.173261 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:22:06.319617 1805825 node_ready.go:53] error getting node "old-k8s-version-011723": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-011723": net/http: TLS handshake timeout
	I0912 23:22:06.813399 1805825 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.215623892s)
	W0912 23:22:06.813431 1805825 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0912 23:22:06.813448 1805825 retry.go:31] will retry after 5.944328604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0912 23:22:07.480812 1805825 node_ready.go:49] node "old-k8s-version-011723" has status "Ready":"True"
	I0912 23:22:07.480838 1805825 node_ready.go:38] duration metric: took 19.663179653s for node "old-k8s-version-011723" to be "Ready" ...
	I0912 23:22:07.480849 1805825 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:22:07.571300 1805825 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace to be "Ready" ...
	I0912 23:22:08.691647 1805825 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.562977276s)
	I0912 23:22:08.692016 1805825 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.374984775s)
	I0912 23:22:08.692076 1805825 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-011723"
	I0912 23:22:08.692148 1805825 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.518866082s)
	I0912 23:22:08.693671 1805825 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-011723 addons enable metrics-server
	
	I0912 23:22:09.578080 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:11.579414 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:12.757956 1805825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:22:13.159084 1805825 out.go:177] * Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
	I0912 23:22:13.161032 1805825 addons.go:510] duration metric: took 25.64435318s for enable addons: enabled=[metrics-server dashboard default-storageclass storage-provisioner]
	I0912 23:22:14.082930 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:16.578216 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:18.579158 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:21.078220 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:23.081750 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:25.607191 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:28.080856 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:30.083681 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:32.578903 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:34.579141 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:37.078928 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:39.579194 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:42.079359 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:44.577986 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:47.079427 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:49.577063 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:51.581169 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:52.578988 1805825 pod_ready.go:93] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"True"
	I0912 23:22:52.579010 1805825 pod_ready.go:82] duration metric: took 45.007601623s for pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace to be "Ready" ...
	I0912 23:22:52.579022 1805825 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:22:54.585950 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:57.086582 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:59.091309 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:01.584645 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:03.584803 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:05.586373 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:07.703232 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:10.092109 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:12.586070 1805825 pod_ready.go:93] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:12.586091 1805825 pod_ready.go:82] duration metric: took 20.007061143s for pod "etcd-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.586105 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.592877 1805825 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:12.592899 1805825 pod_ready.go:82] duration metric: took 6.785661ms for pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.592910 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:14.599464 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:16.600010 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:18.600119 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:21.099631 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:23.598727 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:25.599494 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:28.099402 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:30.108642 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:31.599271 1805825 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.599300 1805825 pod_ready.go:82] duration metric: took 19.006382161s for pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.599314 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cd4m4" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.604530 1805825 pod_ready.go:93] pod "kube-proxy-cd4m4" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.604558 1805825 pod_ready.go:82] duration metric: took 5.236086ms for pod "kube-proxy-cd4m4" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.604570 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.609948 1805825 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.609978 1805825 pod_ready.go:82] duration metric: took 5.399884ms for pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.609991 1805825 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:33.616901 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:36.117406 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:38.616767 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:41.116618 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:43.117109 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:45.119017 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:47.615873 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:49.616264 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:51.616446 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:54.117042 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:56.617982 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:58.618639 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:01.116490 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:03.615956 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:05.616291 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:07.616383 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:10.118971 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:12.615907 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:14.616327 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:16.616660 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:18.618315 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:21.116721 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:23.120405 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:25.615987 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:27.616285 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:29.616736 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:32.116368 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:34.615597 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:36.615875 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:39.116734 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:41.616349 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:43.616829 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:46.116401 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:48.116475 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:50.116632 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:52.616460 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:55.116563 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:57.116939 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:59.615938 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:01.616962 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:04.117033 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:06.615106 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:08.697132 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:11.117036 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:13.615834 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:15.616644 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:18.118163 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:20.616069 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:23.116443 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:25.118089 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:27.618412 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:30.118003 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:32.615831 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:34.616187 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:36.616671 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:39.116087 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:41.615833 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:43.615886 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:45.616059 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:48.116800 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:50.616480 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:53.115634 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:55.117262 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:57.615811 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:59.617089 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:01.617125 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:04.116539 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:06.615639 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:08.616084 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:10.616278 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:13.116566 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:15.118162 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:17.616104 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:19.616529 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:21.617541 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:24.116954 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:26.618594 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:29.135672 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:31.616184 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:34.117097 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:36.616730 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:39.115929 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:41.116884 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:43.116963 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:45.118983 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:47.617481 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:49.623855 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:52.116732 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:54.125647 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:56.615873 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:58.625134 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:01.117368 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:03.615687 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:05.616841 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:07.617150 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:10.117503 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:12.617018 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:15.117143 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:17.120989 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:19.616374 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:22.117190 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:24.615625 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:26.615774 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:28.619296 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:30.620004 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:31.616410 1805825 pod_ready.go:82] duration metric: took 4m0.006405602s for pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace to be "Ready" ...
	E0912 23:27:31.616439 1805825 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:27:31.616449 1805825 pod_ready.go:39] duration metric: took 5m24.135588484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
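The four-minute readiness wait that just timed out above can be reproduced by hand against the same cluster. A minimal sketch, assuming the kubectl context carries the profile name old-k8s-version-011723 seen in the node logs (an assumption, not shown in this output):

    # Check the Ready condition that pod_ready.go polls for
    kubectl --context old-k8s-version-011723 -n kube-system \
      get pod metrics-server-9975d5f86-gklxg \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Or wait with the same 4m deadline and let kubectl report the timeout itself
    kubectl --context old-k8s-version-011723 -n kube-system \
      wait --for=condition=Ready pod/metrics-server-9975d5f86-gklxg --timeout=4m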
	I0912 23:27:31.616463 1805825 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:27:31.616491 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:31.616558 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:31.656235 1805825 cri.go:89] found id: "e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:31.656258 1805825 cri.go:89] found id: "5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:31.656263 1805825 cri.go:89] found id: ""
	I0912 23:27:31.656270 1805825 logs.go:276] 2 containers: [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a]
	I0912 23:27:31.656357 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.659874 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.663300 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:31.663382 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:31.701173 1805825 cri.go:89] found id: "acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:31.701208 1805825 cri.go:89] found id: "f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:31.701213 1805825 cri.go:89] found id: ""
	I0912 23:27:31.701224 1805825 logs.go:276] 2 containers: [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab]
	I0912 23:27:31.701387 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.704920 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.712336 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:31.712435 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:31.749195 1805825 cri.go:89] found id: "761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:31.749217 1805825 cri.go:89] found id: "c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:31.749222 1805825 cri.go:89] found id: ""
	I0912 23:27:31.749229 1805825 logs.go:276] 2 containers: [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28]
	I0912 23:27:31.749310 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.753118 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.756544 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:31.756618 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:31.795469 1805825 cri.go:89] found id: "ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:31.795545 1805825 cri.go:89] found id: "e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:31.795564 1805825 cri.go:89] found id: ""
	I0912 23:27:31.795588 1805825 logs.go:276] 2 containers: [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9]
	I0912 23:27:31.795669 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.799094 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.802491 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:31.802601 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:31.839752 1805825 cri.go:89] found id: "94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:31.839776 1805825 cri.go:89] found id: "f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:31.839781 1805825 cri.go:89] found id: ""
	I0912 23:27:31.839789 1805825 logs.go:276] 2 containers: [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71]
	I0912 23:27:31.839873 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.843319 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.846600 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:31.846683 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:31.887977 1805825 cri.go:89] found id: "e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:31.888039 1805825 cri.go:89] found id: "ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:31.888060 1805825 cri.go:89] found id: ""
	I0912 23:27:31.888089 1805825 logs.go:276] 2 containers: [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99]
	I0912 23:27:31.888195 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.891769 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.895304 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:31.895389 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:31.933166 1805825 cri.go:89] found id: "f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:31.933244 1805825 cri.go:89] found id: "2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:31.933257 1805825 cri.go:89] found id: ""
	I0912 23:27:31.933266 1805825 logs.go:276] 2 containers: [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade]
	I0912 23:27:31.933335 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.936945 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.940365 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:31.940485 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:31.981027 1805825 cri.go:89] found id: "3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:31.981107 1805825 cri.go:89] found id: ""
	I0912 23:27:31.981131 1805825 logs.go:276] 1 containers: [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8]
	I0912 23:27:31.981229 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.985423 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:31.985539 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:32.033051 1805825 cri.go:89] found id: "1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:32.033072 1805825 cri.go:89] found id: "91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:32.033077 1805825 cri.go:89] found id: ""
	I0912 23:27:32.033084 1805825 logs.go:276] 2 containers: [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299]
	I0912 23:27:32.033166 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:32.036867 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:32.040391 1805825 logs.go:123] Gathering logs for kube-proxy [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168] ...
	I0912 23:27:32.040424 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:32.086447 1805825 logs.go:123] Gathering logs for container status ...
	I0912 23:27:32.086476 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:32.136905 1805825 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:32.137006 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:32.190946 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411026     664 reflector.go:138] object-"kube-system"/"coredns-token-m2js8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2js8" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191190 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411278     664 reflector.go:138] object-"kube-system"/"kindnet-token-xzbvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xzbvw" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191408 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411446     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411664     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-k7dwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-k7dwz" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191851 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411960     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192082 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414071     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7cln": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7cln" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192298 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414335     664 reflector.go:138] object-"default"/"default-token-bxtgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bxtgn" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192523 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414403     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fdc76": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fdc76" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.199031 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.557235     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.200921 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.840628     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.205210 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.466661     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.206993 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.472436     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.208452 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.366884     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.210201 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.463324     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.212088 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.478847     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.213664 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.482852     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.214501 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.566317     664 pod_workers.go:191] Error syncing pod 7a260c8b-3e99-476d-bb2a-f42a54017c50 ("busybox_default(7a260c8b-3e99-476d-bb2a-f42a54017c50)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.217039 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.591024     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.217368 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:12 old-k8s-version-011723 kubelet[664]: E0912 23:22:12.488949     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.222297 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:34 old-k8s-version-011723 kubelet[664]: E0912 23:22:34.530645     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.223276 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:36 old-k8s-version-011723 kubelet[664]: E0912 23:22:36.632305     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.223614 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:37 old-k8s-version-011723 kubelet[664]: E0912 23:22:37.632313     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.223957 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:44 old-k8s-version-011723 kubelet[664]: E0912 23:22:44.683335     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.224486 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:47 old-k8s-version-011723 kubelet[664]: E0912 23:22:47.274232     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.225089 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:58 old-k8s-version-011723 kubelet[664]: E0912 23:22:58.684667     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.227589 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:00 old-k8s-version-011723 kubelet[664]: E0912 23:23:00.296836     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.227925 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:04 old-k8s-version-011723 kubelet[664]: E0912 23:23:04.683877     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.228115 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:12 old-k8s-version-011723 kubelet[664]: E0912 23:23:12.274275     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.228446 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:16 old-k8s-version-011723 kubelet[664]: E0912 23:23:16.273507     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.228639 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:23 old-k8s-version-011723 kubelet[664]: E0912 23:23:23.274327     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.229234 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:29 old-k8s-version-011723 kubelet[664]: E0912 23:23:29.784506     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.229569 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:34 old-k8s-version-011723 kubelet[664]: E0912 23:23:34.683510     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.229761 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:38 old-k8s-version-011723 kubelet[664]: E0912 23:23:38.273957     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.230108 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:50 old-k8s-version-011723 kubelet[664]: E0912 23:23:50.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.232628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:52 old-k8s-version-011723 kubelet[664]: E0912 23:23:52.283800     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.232968 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:01 old-k8s-version-011723 kubelet[664]: E0912 23:24:01.274256     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.233157 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:04 old-k8s-version-011723 kubelet[664]: E0912 23:24:04.274131     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.233766 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:14 old-k8s-version-011723 kubelet[664]: E0912 23:24:14.908668     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.233955 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:15 old-k8s-version-011723 kubelet[664]: E0912 23:24:15.282101     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.234302 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:24 old-k8s-version-011723 kubelet[664]: E0912 23:24:24.683372     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.234497 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:27 old-k8s-version-011723 kubelet[664]: E0912 23:24:27.273932     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.234835 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:40 old-k8s-version-011723 kubelet[664]: E0912 23:24:40.274451     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.235026 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:42 old-k8s-version-011723 kubelet[664]: E0912 23:24:42.274046     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.235222 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:53 old-k8s-version-011723 kubelet[664]: E0912 23:24:53.273989     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.235556 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:54 old-k8s-version-011723 kubelet[664]: E0912 23:24:54.273546     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.235753 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:04 old-k8s-version-011723 kubelet[664]: E0912 23:25:04.273901     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.236095 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:05 old-k8s-version-011723 kubelet[664]: E0912 23:25:05.274360     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.238700 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:15 old-k8s-version-011723 kubelet[664]: E0912 23:25:15.305120     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.239058 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:19 old-k8s-version-011723 kubelet[664]: E0912 23:25:19.274155     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.239305 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:28 old-k8s-version-011723 kubelet[664]: E0912 23:25:28.273983     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.239641 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:30 old-k8s-version-011723 kubelet[664]: E0912 23:25:30.273912     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.239839 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:39 old-k8s-version-011723 kubelet[664]: E0912 23:25:39.277598     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.240445 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:43 old-k8s-version-011723 kubelet[664]: E0912 23:25:43.182848     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.240777 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:44 old-k8s-version-011723 kubelet[664]: E0912 23:25:44.683303     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.240968 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:51 old-k8s-version-011723 kubelet[664]: E0912 23:25:51.278776     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.241306 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:58 old-k8s-version-011723 kubelet[664]: E0912 23:25:58.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.241496 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:03 old-k8s-version-011723 kubelet[664]: E0912 23:26:03.278105     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.241833 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:10 old-k8s-version-011723 kubelet[664]: E0912 23:26:10.273581     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.242022 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:14 old-k8s-version-011723 kubelet[664]: E0912 23:26:14.273888     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.242356 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: E0912 23:26:22.273505     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.242547 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:27 old-k8s-version-011723 kubelet[664]: E0912 23:26:27.278136     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.242883 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: E0912 23:26:36.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.243073 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:38 old-k8s-version-011723 kubelet[664]: E0912 23:26:38.273832     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.243267 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:49 old-k8s-version-011723 kubelet[664]: E0912 23:26:49.273962     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.243600 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: E0912 23:26:50.273517     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.243797 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.244128 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.244318 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.244672 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.244861 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
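The kubelet problems listed above reduce to two recurring failures: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (the DNS lookup of fake.domain fails, per the errors), and dashboard-metrics-scraper-8d5bb5db8-gxw94 is stuck in CrashLoopBackOff. A minimal sketch for confirming both from outside the test harness, assuming SSH access through the same profile name (an assumption):

    # Reproduce the ErrImagePull the kubelet reports (expected to fail on the fake.domain lookup)
    minikube -p old-k8s-version-011723 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4

    # Inspect the scraper's restart loop using the pod name from the log above
    kubectl --context old-k8s-version-011723 -n kubernetes-dashboard \
      describe pod dashboard-metrics-scraper-8d5bb5db8-gxw94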
	I0912 23:27:32.244873 1805825 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:32.244888 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:32.263536 1805825 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:32.263566 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:32.418067 1805825 logs.go:123] Gathering logs for kube-apiserver [5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a] ...
	I0912 23:27:32.418100 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:32.472888 1805825 logs.go:123] Gathering logs for etcd [f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab] ...
	I0912 23:27:32.472923 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:32.521854 1805825 logs.go:123] Gathering logs for kube-scheduler [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb] ...
	I0912 23:27:32.521887 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:32.574517 1805825 logs.go:123] Gathering logs for kube-apiserver [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b] ...
	I0912 23:27:32.574547 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:32.634129 1805825 logs.go:123] Gathering logs for coredns [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623] ...
	I0912 23:27:32.634165 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:32.680668 1805825 logs.go:123] Gathering logs for kindnet [2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade] ...
	I0912 23:27:32.680696 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:32.718806 1805825 logs.go:123] Gathering logs for storage-provisioner [91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299] ...
	I0912 23:27:32.718833 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:32.764018 1805825 logs.go:123] Gathering logs for etcd [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa] ...
	I0912 23:27:32.764051 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:32.815016 1805825 logs.go:123] Gathering logs for coredns [c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28] ...
	I0912 23:27:32.815050 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:32.860482 1805825 logs.go:123] Gathering logs for kube-scheduler [e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9] ...
	I0912 23:27:32.860511 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:32.903923 1805825 logs.go:123] Gathering logs for kube-proxy [f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71] ...
	I0912 23:27:32.903952 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:32.947569 1805825 logs.go:123] Gathering logs for kube-controller-manager [ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99] ...
	I0912 23:27:32.947601 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:33.028599 1805825 logs.go:123] Gathering logs for kindnet [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64] ...
	I0912 23:27:33.028643 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:33.070980 1805825 logs.go:123] Gathering logs for kube-controller-manager [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340] ...
	I0912 23:27:33.071011 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:33.133140 1805825 logs.go:123] Gathering logs for kubernetes-dashboard [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8] ...
	I0912 23:27:33.133176 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:33.185246 1805825 logs.go:123] Gathering logs for storage-provisioner [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9] ...
	I0912 23:27:33.185279 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:33.225304 1805825 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:33.225338 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:33.298930 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:33.298966 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:33.299025 1805825 out.go:270] X Problems detected in kubelet:
	W0912 23:27:33.299037 1805825 out.go:270]   Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:33.299044 1805825 out.go:270]   Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:33.299070 1805825 out.go:270]   Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:33.299077 1805825 out.go:270]   Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:33.299086 1805825 out.go:270]   Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:33.299091 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:33.299097 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:43.301335 1805825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:27:43.313968 1805825 api_server.go:72] duration metric: took 5m55.797654969s to wait for apiserver process to appear ...
	I0912 23:27:43.313998 1805825 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:27:43.314038 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:43.314095 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:43.353269 1805825 cri.go:89] found id: "e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:43.353292 1805825 cri.go:89] found id: "5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:43.353297 1805825 cri.go:89] found id: ""
	I0912 23:27:43.353305 1805825 logs.go:276] 2 containers: [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a]
	I0912 23:27:43.353363 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.357194 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.360691 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:43.360764 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:43.409115 1805825 cri.go:89] found id: "acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:43.409135 1805825 cri.go:89] found id: "f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:43.409140 1805825 cri.go:89] found id: ""
	I0912 23:27:43.409148 1805825 logs.go:276] 2 containers: [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab]
	I0912 23:27:43.409205 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.414385 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.418038 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:43.418103 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:43.470267 1805825 cri.go:89] found id: "761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:43.470287 1805825 cri.go:89] found id: "c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:43.470292 1805825 cri.go:89] found id: ""
	I0912 23:27:43.470299 1805825 logs.go:276] 2 containers: [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28]
	I0912 23:27:43.470361 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.474528 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.478036 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:43.478114 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:43.520737 1805825 cri.go:89] found id: "ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:43.520766 1805825 cri.go:89] found id: "e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:43.520773 1805825 cri.go:89] found id: ""
	I0912 23:27:43.520780 1805825 logs.go:276] 2 containers: [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9]
	I0912 23:27:43.520840 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.524564 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.528472 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:43.528544 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:43.575684 1805825 cri.go:89] found id: "94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:43.575760 1805825 cri.go:89] found id: "f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:43.575767 1805825 cri.go:89] found id: ""
	I0912 23:27:43.575775 1805825 logs.go:276] 2 containers: [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71]
	I0912 23:27:43.575853 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.579573 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.582968 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:43.583040 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:43.622197 1805825 cri.go:89] found id: "e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:43.622232 1805825 cri.go:89] found id: "ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:43.622242 1805825 cri.go:89] found id: ""
	I0912 23:27:43.622267 1805825 logs.go:276] 2 containers: [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99]
	I0912 23:27:43.622346 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.625917 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.629579 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:43.629657 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:43.672137 1805825 cri.go:89] found id: "f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:43.672160 1805825 cri.go:89] found id: "2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:43.672166 1805825 cri.go:89] found id: ""
	I0912 23:27:43.672174 1805825 logs.go:276] 2 containers: [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade]
	I0912 23:27:43.672232 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.675927 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.680672 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:43.680740 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:43.718009 1805825 cri.go:89] found id: "3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:43.718034 1805825 cri.go:89] found id: ""
	I0912 23:27:43.718042 1805825 logs.go:276] 1 containers: [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8]
	I0912 23:27:43.718099 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.721672 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:43.721750 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:43.759833 1805825 cri.go:89] found id: "1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:43.759855 1805825 cri.go:89] found id: "91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:43.759860 1805825 cri.go:89] found id: ""
	I0912 23:27:43.759867 1805825 logs.go:276] 2 containers: [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299]
	I0912 23:27:43.759939 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.763575 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.767207 1805825 logs.go:123] Gathering logs for kubernetes-dashboard [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8] ...
	I0912 23:27:43.767286 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:43.812189 1805825 logs.go:123] Gathering logs for etcd [f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab] ...
	I0912 23:27:43.812262 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:43.860473 1805825 logs.go:123] Gathering logs for coredns [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623] ...
	I0912 23:27:43.860501 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:43.901572 1805825 logs.go:123] Gathering logs for kube-proxy [f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71] ...
	I0912 23:27:43.901599 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:43.941592 1805825 logs.go:123] Gathering logs for kube-controller-manager [ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99] ...
	I0912 23:27:43.941623 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:44.000715 1805825 logs.go:123] Gathering logs for kindnet [2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade] ...
	I0912 23:27:44.000754 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:44.049375 1805825 logs.go:123] Gathering logs for storage-provisioner [91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299] ...
	I0912 23:27:44.049402 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:44.095108 1805825 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:44.095139 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:44.155926 1805825 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:44.155961 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:44.207364 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411026     664 reflector.go:138] object-"kube-system"/"coredns-token-m2js8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2js8" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.207620 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411278     664 reflector.go:138] object-"kube-system"/"kindnet-token-xzbvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xzbvw" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.207833 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411446     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208051 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411664     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-k7dwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-k7dwz" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208258 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411960     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208485 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414071     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7cln": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7cln" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208697 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414335     664 reflector.go:138] object-"default"/"default-token-bxtgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bxtgn" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208921 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414403     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fdc76": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fdc76" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.215368 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.557235     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.217198 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.840628     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.221475 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.466661     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.223249 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.472436     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.224648 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.366884     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.226290 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.463324     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.228181 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.478847     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.229717 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.482852     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.230538 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.566317     664 pod_workers.go:191] Error syncing pod 7a260c8b-3e99-476d-bb2a-f42a54017c50 ("busybox_default(7a260c8b-3e99-476d-bb2a-f42a54017c50)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.233034 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.591024     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.233356 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:12 old-k8s-version-011723 kubelet[664]: E0912 23:22:12.488949     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.238182 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:34 old-k8s-version-011723 kubelet[664]: E0912 23:22:34.530645     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.239114 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:36 old-k8s-version-011723 kubelet[664]: E0912 23:22:36.632305     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.239454 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:37 old-k8s-version-011723 kubelet[664]: E0912 23:22:37.632313     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.239792 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:44 old-k8s-version-011723 kubelet[664]: E0912 23:22:44.683335     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.240321 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:47 old-k8s-version-011723 kubelet[664]: E0912 23:22:47.274232     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.240914 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:58 old-k8s-version-011723 kubelet[664]: E0912 23:22:58.684667     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.243409 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:00 old-k8s-version-011723 kubelet[664]: E0912 23:23:00.296836     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.243745 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:04 old-k8s-version-011723 kubelet[664]: E0912 23:23:04.683877     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.243932 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:12 old-k8s-version-011723 kubelet[664]: E0912 23:23:12.274275     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.244259 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:16 old-k8s-version-011723 kubelet[664]: E0912 23:23:16.273507     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.244448 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:23 old-k8s-version-011723 kubelet[664]: E0912 23:23:23.274327     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.245036 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:29 old-k8s-version-011723 kubelet[664]: E0912 23:23:29.784506     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.245365 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:34 old-k8s-version-011723 kubelet[664]: E0912 23:23:34.683510     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.245553 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:38 old-k8s-version-011723 kubelet[664]: E0912 23:23:38.273957     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.245882 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:50 old-k8s-version-011723 kubelet[664]: E0912 23:23:50.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.248381 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:52 old-k8s-version-011723 kubelet[664]: E0912 23:23:52.283800     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.248855 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:01 old-k8s-version-011723 kubelet[664]: E0912 23:24:01.274256     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.250523 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:04 old-k8s-version-011723 kubelet[664]: E0912 23:24:04.274131     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.251141 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:14 old-k8s-version-011723 kubelet[664]: E0912 23:24:14.908668     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.251337 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:15 old-k8s-version-011723 kubelet[664]: E0912 23:24:15.282101     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.251674 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:24 old-k8s-version-011723 kubelet[664]: E0912 23:24:24.683372     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.251898 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:27 old-k8s-version-011723 kubelet[664]: E0912 23:24:27.273932     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252232 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:40 old-k8s-version-011723 kubelet[664]: E0912 23:24:40.274451     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.252419 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:42 old-k8s-version-011723 kubelet[664]: E0912 23:24:42.274046     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252606 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:53 old-k8s-version-011723 kubelet[664]: E0912 23:24:53.273989     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252934 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:54 old-k8s-version-011723 kubelet[664]: E0912 23:24:54.273546     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.253120 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:04 old-k8s-version-011723 kubelet[664]: E0912 23:25:04.273901     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.253473 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:05 old-k8s-version-011723 kubelet[664]: E0912 23:25:05.274360     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.255953 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:15 old-k8s-version-011723 kubelet[664]: E0912 23:25:15.305120     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.256289 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:19 old-k8s-version-011723 kubelet[664]: E0912 23:25:19.274155     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.256481 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:28 old-k8s-version-011723 kubelet[664]: E0912 23:25:28.273983     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.256812 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:30 old-k8s-version-011723 kubelet[664]: E0912 23:25:30.273912     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.256999 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:39 old-k8s-version-011723 kubelet[664]: E0912 23:25:39.277598     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.257586 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:43 old-k8s-version-011723 kubelet[664]: E0912 23:25:43.182848     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.257913 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:44 old-k8s-version-011723 kubelet[664]: E0912 23:25:44.683303     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.258101 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:51 old-k8s-version-011723 kubelet[664]: E0912 23:25:51.278776     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.258443 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:58 old-k8s-version-011723 kubelet[664]: E0912 23:25:58.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.258628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:03 old-k8s-version-011723 kubelet[664]: E0912 23:26:03.278105     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.258985 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:10 old-k8s-version-011723 kubelet[664]: E0912 23:26:10.273581     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.259180 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:14 old-k8s-version-011723 kubelet[664]: E0912 23:26:14.273888     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.259511 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: E0912 23:26:22.273505     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.259708 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:27 old-k8s-version-011723 kubelet[664]: E0912 23:26:27.278136     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260056 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: E0912 23:26:36.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.260247 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:38 old-k8s-version-011723 kubelet[664]: E0912 23:26:38.273832     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260434 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:49 old-k8s-version-011723 kubelet[664]: E0912 23:26:49.273962     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260762 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: E0912 23:26:50.273517     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.260949 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.261277 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.261466 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.261795 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.261982 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.262312 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.262501 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:44.262513 1805825 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:44.262527 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:44.406836 1805825 logs.go:123] Gathering logs for kube-apiserver [5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a] ...
	I0912 23:27:44.406930 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:44.479657 1805825 logs.go:123] Gathering logs for kube-scheduler [e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9] ...
	I0912 23:27:44.479689 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:44.528355 1805825 logs.go:123] Gathering logs for storage-provisioner [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9] ...
	I0912 23:27:44.528385 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:44.590340 1805825 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:44.590373 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:44.607129 1805825 logs.go:123] Gathering logs for kube-apiserver [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b] ...
	I0912 23:27:44.607185 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:44.669500 1805825 logs.go:123] Gathering logs for kube-scheduler [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb] ...
	I0912 23:27:44.669536 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:44.708883 1805825 logs.go:123] Gathering logs for container status ...
	I0912 23:27:44.708912 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:44.757271 1805825 logs.go:123] Gathering logs for etcd [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa] ...
	I0912 23:27:44.757307 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:44.800471 1805825 logs.go:123] Gathering logs for coredns [c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28] ...
	I0912 23:27:44.800505 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:44.845088 1805825 logs.go:123] Gathering logs for kube-proxy [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168] ...
	I0912 23:27:44.845117 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:44.883749 1805825 logs.go:123] Gathering logs for kube-controller-manager [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340] ...
	I0912 23:27:44.883783 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:44.946173 1805825 logs.go:123] Gathering logs for kindnet [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64] ...
	I0912 23:27:44.946213 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:44.987811 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:44.987836 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:44.987883 1805825 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0912 23:27:44.987898 1805825 out.go:270]   Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.987905 1805825 out.go:270]   Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	  Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.987918 1805825 out.go:270]   Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.987925 1805825 out.go:270]   Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	  Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.987931 1805825 out.go:270]   Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:44.987940 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:44.987946 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:54.989447 1805825 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 23:27:55.001137 1805825 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0912 23:27:55.007655 1805825 out.go:201] 
	W0912 23:27:55.012275 1805825 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0912 23:27:55.012327 1805825 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0912 23:27:55.012348 1805825 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0912 23:27:55.012355 1805825 out.go:270] * 
	* 
	W0912 23:27:55.013626 1805825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:27:55.016564 1805825 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-011723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-011723
helpers_test.go:235: (dbg) docker inspect old-k8s-version-011723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055",
	        "Created": "2024-09-12T23:18:38.237870961Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1806034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T23:21:39.525605412Z",
	            "FinishedAt": "2024-09-12T23:21:38.230835917Z"
	        },
	        "Image": "sha256:5a18b2e89815d9320db97822722b50bf88d564940d3d81fe93adf39e9c88570e",
	        "ResolvConfPath": "/var/lib/docker/containers/680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055/hostname",
	        "HostsPath": "/var/lib/docker/containers/680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055/hosts",
	        "LogPath": "/var/lib/docker/containers/680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055/680e0605cc8ab0428c03c4b37b6eb130969bf61696c45168bbe1c95be7f83055-json.log",
	        "Name": "/old-k8s-version-011723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-011723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-011723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f819df6ae33bb3f28c65f8e735bcfe2ea0fced7f0d81898f3c1b046a6ae30d3e-init/diff:/var/lib/docker/overlay2/22619844066f8062a761e6c26d439ab232db1d4015e623ac6dd91ab5ce435ce2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f819df6ae33bb3f28c65f8e735bcfe2ea0fced7f0d81898f3c1b046a6ae30d3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f819df6ae33bb3f28c65f8e735bcfe2ea0fced7f0d81898f3c1b046a6ae30d3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f819df6ae33bb3f28c65f8e735bcfe2ea0fced7f0d81898f3c1b046a6ae30d3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-011723",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-011723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-011723",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-011723",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-011723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f79fdaac47ab11409b6fcd4a815a1b2b4d460ebd57c33ca2543e4e64254bdf47",
	            "SandboxKey": "/var/run/docker/netns/f79fdaac47ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34934"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34935"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34938"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34936"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34937"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-011723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "644d5791a3ff4f65673bd300f36f9d4a0426907ab23cc1635f66562e4668773a",
	                    "EndpointID": "6d05e5ea27fa37b0d2a626f16e5ea289d23de985f33c3db5ae7c26cb8fdd9b53",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-011723",
	                        "680e0605cc8a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-011723 -n old-k8s-version-011723
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-011723 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-011723 logs -n 25: (3.201252001s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-098328                            | force-systemd-env-098328 | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:17 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p pause-693262                                        | pause-693262             | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:17 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-693262                                        | pause-693262             | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:17 UTC |
	| start   | -p cert-expiration-905537                              | cert-expiration-905537   | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:18 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-098328                               | force-systemd-env-098328 | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:17 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-098328                            | force-systemd-env-098328 | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:17 UTC |
	| start   | -p cert-options-713058                                 | cert-options-713058      | jenkins | v1.34.0 | 12 Sep 24 23:17 UTC | 12 Sep 24 23:18 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-713058 ssh                                | cert-options-713058      | jenkins | v1.34.0 | 12 Sep 24 23:18 UTC | 12 Sep 24 23:18 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-713058 -- sudo                         | cert-options-713058      | jenkins | v1.34.0 | 12 Sep 24 23:18 UTC | 12 Sep 24 23:18 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-713058                                 | cert-options-713058      | jenkins | v1.34.0 | 12 Sep 24 23:18 UTC | 12 Sep 24 23:18 UTC |
	| start   | -p old-k8s-version-011723                              | old-k8s-version-011723   | jenkins | v1.34.0 | 12 Sep 24 23:18 UTC | 12 Sep 24 23:21 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-905537                              | cert-expiration-905537   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:21 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-905537                              | cert-expiration-905537   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:21 UTC |
	| start   | -p no-preload-693555                                   | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:22 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-011723        | old-k8s-version-011723   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-011723                              | old-k8s-version-011723   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:21 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-011723             | old-k8s-version-011723   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC | 12 Sep 24 23:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-011723                              | old-k8s-version-011723   | jenkins | v1.34.0 | 12 Sep 24 23:21 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-693555             | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-693555                                   | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-693555                  | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-693555                                   | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:22 UTC | 12 Sep 24 23:27 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-693555 image list                           | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:27 UTC | 12 Sep 24 23:27 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-693555                                   | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:27 UTC | 12 Sep 24 23:27 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-693555                                   | no-preload-693555        | jenkins | v1.34.0 | 12 Sep 24 23:27 UTC | 12 Sep 24 23:27 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 23:22:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 23:22:51.453809 1810706 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:22:51.453987 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:51.453998 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:22:51.454003 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:22:51.454248 1810706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 23:22:51.454620 1810706 out.go:352] Setting JSON to false
	I0912 23:22:51.455796 1810706 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29099,"bootTime":1726154273,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 23:22:51.455873 1810706 start.go:139] virtualization:  
	I0912 23:22:51.459603 1810706 out.go:177] * [no-preload-693555] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 23:22:51.461716 1810706 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:22:51.461747 1810706 notify.go:220] Checking for updates...
	I0912 23:22:51.467608 1810706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:22:51.469595 1810706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:22:51.471425 1810706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 23:22:51.473107 1810706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 23:22:51.474862 1810706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:22:51.477193 1810706 config.go:182] Loaded profile config "no-preload-693555": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:22:51.477759 1810706 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:22:51.505284 1810706 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 23:22:51.505407 1810706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:22:51.598837 1810706 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-12 23:22:51.587213167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:22:51.598959 1810706 docker.go:318] overlay module found
	I0912 23:22:51.601611 1810706 out.go:177] * Using the docker driver based on existing profile
	I0912 23:22:51.603628 1810706 start.go:297] selected driver: docker
	I0912 23:22:51.603656 1810706 start.go:901] validating driver "docker" against &{Name:no-preload-693555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-693555 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:22:51.603880 1810706 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:22:51.604517 1810706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:22:51.662385 1810706 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-12 23:22:51.652413391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:22:51.662729 1810706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:22:51.662756 1810706 cni.go:84] Creating CNI manager for ""
	I0912 23:22:51.662764 1810706 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 23:22:51.662801 1810706 start.go:340] cluster config:
	{Name:no-preload-693555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-693555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:22:51.665814 1810706 out.go:177] * Starting "no-preload-693555" primary control-plane node in "no-preload-693555" cluster
	I0912 23:22:51.667793 1810706 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0912 23:22:51.669728 1810706 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 23:22:51.671763 1810706 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 23:22:51.671812 1810706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 23:22:51.671917 1810706 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/config.json ...
	I0912 23:22:51.672232 1810706 cache.go:107] acquiring lock: {Name:mk014defa35ee2c3d9682c815f21ab018748b9d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672312 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 23:22:51.672321 1810706 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.268µs
	I0912 23:22:51.672334 1810706 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 23:22:51.672344 1810706 cache.go:107] acquiring lock: {Name:mk89d9931b1fbbcee4d1ed196f40307b98f8f572 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672378 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0912 23:22:51.672383 1810706 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 40.263µs
	I0912 23:22:51.672389 1810706 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0912 23:22:51.672398 1810706 cache.go:107] acquiring lock: {Name:mk16427431b2fe3b246ddc0906a6351076417536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672427 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0912 23:22:51.672405 1810706 cache.go:107] acquiring lock: {Name:mkc9a5392f7ddedc8280eb1905f178a47141263e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672449 1810706 cache.go:107] acquiring lock: {Name:mk8d71d4f345a7f47d69dd1cea17d5781c5306d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672482 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0912 23:22:51.672484 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0912 23:22:51.672487 1810706 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 43.922µs
	I0912 23:22:51.672494 1810706 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 95.77µs
	I0912 23:22:51.672498 1810706 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0912 23:22:51.672501 1810706 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0912 23:22:51.672508 1810706 cache.go:107] acquiring lock: {Name:mk922d2ef87e8ea43b12aa0d5c4195ded21fe103 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672513 1810706 cache.go:107] acquiring lock: {Name:mk470ced10fde97f54c66b6c0002c37cf9fae4ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672535 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0912 23:22:51.672541 1810706 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 34.322µs
	I0912 23:22:51.672547 1810706 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0912 23:22:51.672551 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0912 23:22:51.672557 1810706 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 46.063µs
	I0912 23:22:51.672558 1810706 cache.go:107] acquiring lock: {Name:mk3b46bca000c5dd18145f0f882a526f7be17737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.672585 1810706 cache.go:115] /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0912 23:22:51.672590 1810706 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 33.534µs
	I0912 23:22:51.672596 1810706 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0912 23:22:51.672564 1810706 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0912 23:22:51.672432 1810706 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 34.707µs
	I0912 23:22:51.672605 1810706 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0912 23:22:51.672609 1810706 cache.go:87] Successfully saved all images to host disk.
	W0912 23:22:51.690851 1810706 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 is of wrong architecture
	I0912 23:22:51.690874 1810706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 23:22:51.690948 1810706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 23:22:51.690971 1810706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 23:22:51.690979 1810706 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 23:22:51.690988 1810706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 23:22:51.690993 1810706 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 23:22:51.811856 1810706 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 23:22:51.811896 1810706 cache.go:194] Successfully downloaded all kic artifacts
	I0912 23:22:51.811926 1810706 start.go:360] acquireMachinesLock for no-preload-693555: {Name:mk83a5748e0533acea5d0ef167bbf0418e788f96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 23:22:51.812004 1810706 start.go:364] duration metric: took 55.344µs to acquireMachinesLock for "no-preload-693555"
	I0912 23:22:51.812032 1810706 start.go:96] Skipping create...Using existing machine configuration
	I0912 23:22:51.812040 1810706 fix.go:54] fixHost starting: 
	I0912 23:22:51.812317 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:51.828930 1810706 fix.go:112] recreateIfNeeded on no-preload-693555: state=Stopped err=<nil>
	W0912 23:22:51.828963 1810706 fix.go:138] unexpected machine state, will restart: <nil>
	I0912 23:22:51.831151 1810706 out.go:177] * Restarting existing docker container for "no-preload-693555" ...
	I0912 23:22:49.577063 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:51.581169 1805825 pod_ready.go:103] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:52.578988 1805825 pod_ready.go:93] pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace has status "Ready":"True"
	I0912 23:22:52.579010 1805825 pod_ready.go:82] duration metric: took 45.007601623s for pod "coredns-74ff55c5b-lzb66" in "kube-system" namespace to be "Ready" ...
	I0912 23:22:52.579022 1805825 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:22:51.833068 1810706 cli_runner.go:164] Run: docker start no-preload-693555
	I0912 23:22:52.195858 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:52.220809 1810706 kic.go:430] container "no-preload-693555" state is running.
	I0912 23:22:52.221229 1810706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-693555
	I0912 23:22:52.244893 1810706 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/config.json ...
	I0912 23:22:52.245112 1810706 machine.go:93] provisionDockerMachine start ...
	I0912 23:22:52.245726 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:52.278113 1810706 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:52.278397 1810706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34939 <nil> <nil>}
	I0912 23:22:52.278408 1810706 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 23:22:52.279174 1810706 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0912 23:22:55.427579 1810706 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-693555
	
	I0912 23:22:55.427606 1810706 ubuntu.go:169] provisioning hostname "no-preload-693555"
	I0912 23:22:55.427692 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:55.453650 1810706 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:55.453903 1810706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34939 <nil> <nil>}
	I0912 23:22:55.453914 1810706 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-693555 && echo "no-preload-693555" | sudo tee /etc/hostname
	I0912 23:22:55.608206 1810706 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-693555
	
	I0912 23:22:55.608362 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:55.625717 1810706 main.go:141] libmachine: Using SSH client type: native
	I0912 23:22:55.625960 1810706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34939 <nil> <nil>}
	I0912 23:22:55.625977 1810706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-693555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-693555/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-693555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 23:22:55.767807 1810706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 23:22:55.767877 1810706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-1592376/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-1592376/.minikube}
	I0912 23:22:55.767939 1810706 ubuntu.go:177] setting up certificates
	I0912 23:22:55.767969 1810706 provision.go:84] configureAuth start
	I0912 23:22:55.768040 1810706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-693555
	I0912 23:22:55.785381 1810706 provision.go:143] copyHostCerts
	I0912 23:22:55.785454 1810706 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem, removing ...
	I0912 23:22:55.785469 1810706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem
	I0912 23:22:55.785552 1810706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/cert.pem (1123 bytes)
	I0912 23:22:55.785651 1810706 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem, removing ...
	I0912 23:22:55.785660 1810706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem
	I0912 23:22:55.785689 1810706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/key.pem (1675 bytes)
	I0912 23:22:55.785748 1810706 exec_runner.go:144] found /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem, removing ...
	I0912 23:22:55.785758 1810706 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem
	I0912 23:22:55.785782 1810706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.pem (1082 bytes)
	I0912 23:22:55.785834 1810706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem org=jenkins.no-preload-693555 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-693555]
	I0912 23:22:56.181548 1810706 provision.go:177] copyRemoteCerts
	I0912 23:22:56.181650 1810706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 23:22:56.181705 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:56.199409 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:56.300704 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 23:22:56.327776 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0912 23:22:56.353776 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 23:22:56.384513 1810706 provision.go:87] duration metric: took 616.522804ms to configureAuth
	I0912 23:22:56.384545 1810706 ubuntu.go:193] setting minikube options for container-runtime
	I0912 23:22:56.384746 1810706 config.go:182] Loaded profile config "no-preload-693555": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:22:56.384758 1810706 machine.go:96] duration metric: took 4.139638853s to provisionDockerMachine
	I0912 23:22:56.384775 1810706 start.go:293] postStartSetup for "no-preload-693555" (driver="docker")
	I0912 23:22:56.384791 1810706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 23:22:56.384853 1810706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 23:22:56.384896 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:56.400964 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:56.504638 1810706 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 23:22:56.507853 1810706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 23:22:56.507890 1810706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 23:22:56.507901 1810706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 23:22:56.507923 1810706 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 23:22:56.507939 1810706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/addons for local assets ...
	I0912 23:22:56.508008 1810706 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1592376/.minikube/files for local assets ...
	I0912 23:22:56.508105 1810706 filesync.go:149] local asset: /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem -> 15977602.pem in /etc/ssl/certs
	I0912 23:22:56.508212 1810706 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 23:22:56.516938 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem --> /etc/ssl/certs/15977602.pem (1708 bytes)
	I0912 23:22:56.542989 1810706 start.go:296] duration metric: took 158.190727ms for postStartSetup
	I0912 23:22:56.543082 1810706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 23:22:56.543125 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:56.559653 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:56.661331 1810706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 23:22:56.667125 1810706 fix.go:56] duration metric: took 4.855077246s for fixHost
	I0912 23:22:56.667154 1810706 start.go:83] releasing machines lock for "no-preload-693555", held for 4.855134985s
	I0912 23:22:56.667225 1810706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-693555
	I0912 23:22:56.686867 1810706 ssh_runner.go:195] Run: cat /version.json
	I0912 23:22:56.686911 1810706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 23:22:56.686921 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:56.686983 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:56.707380 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:56.717307 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:56.803050 1810706 ssh_runner.go:195] Run: systemctl --version
	I0912 23:22:56.954158 1810706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 23:22:56.958596 1810706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 23:22:56.977516 1810706 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0912 23:22:56.977599 1810706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 23:22:56.987753 1810706 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 23:22:56.987776 1810706 start.go:495] detecting cgroup driver to use...
	I0912 23:22:56.987810 1810706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 23:22:56.987864 1810706 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 23:22:57.002542 1810706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 23:22:57.017803 1810706 docker.go:217] disabling cri-docker service (if available) ...
	I0912 23:22:57.017926 1810706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 23:22:57.032638 1810706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 23:22:57.046220 1810706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 23:22:57.140476 1810706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 23:22:57.233077 1810706 docker.go:233] disabling docker service ...
	I0912 23:22:57.233149 1810706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 23:22:57.246352 1810706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 23:22:57.257996 1810706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 23:22:57.338244 1810706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 23:22:57.425250 1810706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 23:22:57.436535 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 23:22:57.453477 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 23:22:57.464991 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 23:22:57.476481 1810706 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 23:22:57.476570 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 23:22:57.487308 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 23:22:57.497948 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 23:22:57.509828 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 23:22:57.521662 1810706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 23:22:57.534918 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 23:22:57.547237 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 23:22:57.559317 1810706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 23:22:57.571782 1810706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 23:22:57.585116 1810706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 23:22:57.596006 1810706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:22:57.703561 1810706 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 23:22:57.869424 1810706 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0912 23:22:57.869606 1810706 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 23:22:57.873492 1810706 start.go:563] Will wait 60s for crictl version
	I0912 23:22:57.873595 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:22:57.876893 1810706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 23:22:57.918166 1810706 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0912 23:22:57.918304 1810706 ssh_runner.go:195] Run: containerd --version
	I0912 23:22:57.943529 1810706 ssh_runner.go:195] Run: containerd --version
	I0912 23:22:57.975237 1810706 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0912 23:22:54.585950 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:57.086582 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:22:57.976892 1810706 cli_runner.go:164] Run: docker network inspect no-preload-693555 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 23:22:57.993326 1810706 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0912 23:22:57.996997 1810706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:22:58.012986 1810706 kubeadm.go:883] updating cluster {Name:no-preload-693555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-693555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 23:22:58.013122 1810706 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 23:22:58.013187 1810706 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 23:22:58.060738 1810706 containerd.go:627] all images are preloaded for containerd runtime.
	I0912 23:22:58.060765 1810706 cache_images.go:84] Images are preloaded, skipping loading
	I0912 23:22:58.060774 1810706 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0912 23:22:58.060885 1810706 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-693555 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-693555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 23:22:58.060958 1810706 ssh_runner.go:195] Run: sudo crictl info
	I0912 23:22:58.109622 1810706 cni.go:84] Creating CNI manager for ""
	I0912 23:22:58.109648 1810706 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 23:22:58.109658 1810706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 23:22:58.109684 1810706 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-693555 NodeName:no-preload-693555 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 23:22:58.109833 1810706 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-693555"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 23:22:58.109910 1810706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 23:22:58.121017 1810706 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 23:22:58.121094 1810706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 23:22:58.130450 1810706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0912 23:22:58.149974 1810706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 23:22:58.170029 1810706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0912 23:22:58.188937 1810706 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0912 23:22:58.192685 1810706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 23:22:58.203296 1810706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:22:58.302170 1810706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:22:58.323682 1810706 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555 for IP: 192.168.85.2
	I0912 23:22:58.323840 1810706 certs.go:194] generating shared ca certs ...
	I0912 23:22:58.323860 1810706 certs.go:226] acquiring lock for ca certs: {Name:mk5b7cca91a053f0ec1ca9c487c600f7eefaa6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:58.324032 1810706 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key
	I0912 23:22:58.324074 1810706 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key
	I0912 23:22:58.324081 1810706 certs.go:256] generating profile certs ...
	I0912 23:22:58.324171 1810706 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.key
	I0912 23:22:58.324239 1810706 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/apiserver.key.ce120de9
	I0912 23:22:58.324276 1810706 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/proxy-client.key
	I0912 23:22:58.324388 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760.pem (1338 bytes)
	W0912 23:22:58.324420 1810706 certs.go:480] ignoring /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760_empty.pem, impossibly tiny 0 bytes
	I0912 23:22:58.324429 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca-key.pem (1679 bytes)
	I0912 23:22:58.324455 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/ca.pem (1082 bytes)
	I0912 23:22:58.324477 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/cert.pem (1123 bytes)
	I0912 23:22:58.324501 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/key.pem (1675 bytes)
	I0912 23:22:58.324543 1810706 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem (1708 bytes)
	I0912 23:22:58.325163 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 23:22:58.388708 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 23:22:58.437687 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 23:22:58.492352 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 23:22:58.534444 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0912 23:22:58.564689 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 23:22:58.605553 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 23:22:58.640648 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 23:22:58.669168 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 23:22:58.706342 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/certs/1597760.pem --> /usr/share/ca-certificates/1597760.pem (1338 bytes)
	I0912 23:22:58.734618 1810706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/ssl/certs/15977602.pem --> /usr/share/ca-certificates/15977602.pem (1708 bytes)
	I0912 23:22:58.768729 1810706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 23:22:58.790433 1810706 ssh_runner.go:195] Run: openssl version
	I0912 23:22:58.797896 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 23:22:58.809289 1810706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:22:58.812970 1810706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 22:30 /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:22:58.813082 1810706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 23:22:58.820305 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 23:22:58.830162 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597760.pem && ln -fs /usr/share/ca-certificates/1597760.pem /etc/ssl/certs/1597760.pem"
	I0912 23:22:58.840024 1810706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597760.pem
	I0912 23:22:58.844058 1810706 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 12 22:41 /usr/share/ca-certificates/1597760.pem
	I0912 23:22:58.844158 1810706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597760.pem
	I0912 23:22:58.851506 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597760.pem /etc/ssl/certs/51391683.0"
	I0912 23:22:58.861100 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15977602.pem && ln -fs /usr/share/ca-certificates/15977602.pem /etc/ssl/certs/15977602.pem"
	I0912 23:22:58.871304 1810706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15977602.pem
	I0912 23:22:58.874928 1810706 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 12 22:41 /usr/share/ca-certificates/15977602.pem
	I0912 23:22:58.875044 1810706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15977602.pem
	I0912 23:22:58.881919 1810706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15977602.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 23:22:58.891627 1810706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 23:22:58.895790 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 23:22:58.902805 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 23:22:58.909937 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 23:22:58.917237 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 23:22:58.925840 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 23:22:58.932971 1810706 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0912 23:22:58.939952 1810706 kubeadm.go:392] StartCluster: {Name:no-preload-693555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-693555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 23:22:58.940053 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0912 23:22:58.940151 1810706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 23:22:58.994689 1810706 cri.go:89] found id: "3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:22:58.994713 1810706 cri.go:89] found id: "fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:22:58.994718 1810706 cri.go:89] found id: "f9d27646d9b2033dac6adc77f4beabfa2310b96018de7536884911301bcc8d10"
	I0912 23:22:58.994731 1810706 cri.go:89] found id: "28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:22:58.994735 1810706 cri.go:89] found id: "ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:22:58.994778 1810706 cri.go:89] found id: "05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:22:58.994784 1810706 cri.go:89] found id: "9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:22:58.994792 1810706 cri.go:89] found id: "f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:22:58.994796 1810706 cri.go:89] found id: ""
	I0912 23:22:58.994869 1810706 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0912 23:22:59.010379 1810706 cri.go:116] JSON = null
	W0912 23:22:59.010456 1810706 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0912 23:22:59.010572 1810706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 23:22:59.022750 1810706 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0912 23:22:59.022773 1810706 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0912 23:22:59.022860 1810706 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 23:22:59.032653 1810706 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 23:22:59.033274 1810706 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-693555" does not appear in /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:22:59.033550 1810706 kubeconfig.go:62] /home/jenkins/minikube-integration/19616-1592376/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-693555" cluster setting kubeconfig missing "no-preload-693555" context setting]
	I0912 23:22:59.034051 1810706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/kubeconfig: {Name:mk20814b10c438de6fa8214738e210df331cf1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:59.035445 1810706 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 23:22:59.045786 1810706 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0912 23:22:59.045818 1810706 kubeadm.go:597] duration metric: took 23.038977ms to restartPrimaryControlPlane
	I0912 23:22:59.045828 1810706 kubeadm.go:394] duration metric: took 105.887251ms to StartCluster
	I0912 23:22:59.045843 1810706 settings.go:142] acquiring lock: {Name:mk1fdbbc4ffc0e3fc6419399beeda4839e1c5a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:59.045907 1810706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:22:59.046822 1810706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/kubeconfig: {Name:mk20814b10c438de6fa8214738e210df331cf1df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 23:22:59.047023 1810706 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 23:22:59.047323 1810706 config.go:182] Loaded profile config "no-preload-693555": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:22:59.047366 1810706 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0912 23:22:59.047429 1810706 addons.go:69] Setting storage-provisioner=true in profile "no-preload-693555"
	I0912 23:22:59.047451 1810706 addons.go:234] Setting addon storage-provisioner=true in "no-preload-693555"
	W0912 23:22:59.047459 1810706 addons.go:243] addon storage-provisioner should already be in state true
	I0912 23:22:59.047496 1810706 host.go:66] Checking if "no-preload-693555" exists ...
	I0912 23:22:59.047971 1810706 addons.go:69] Setting default-storageclass=true in profile "no-preload-693555"
	I0912 23:22:59.048003 1810706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-693555"
	I0912 23:22:59.048242 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:59.048468 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:59.048645 1810706 addons.go:69] Setting dashboard=true in profile "no-preload-693555"
	I0912 23:22:59.048684 1810706 addons.go:234] Setting addon dashboard=true in "no-preload-693555"
	W0912 23:22:59.048692 1810706 addons.go:243] addon dashboard should already be in state true
	I0912 23:22:59.048717 1810706 host.go:66] Checking if "no-preload-693555" exists ...
	I0912 23:22:59.049148 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:59.049472 1810706 addons.go:69] Setting metrics-server=true in profile "no-preload-693555"
	I0912 23:22:59.049512 1810706 addons.go:234] Setting addon metrics-server=true in "no-preload-693555"
	W0912 23:22:59.049523 1810706 addons.go:243] addon metrics-server should already be in state true
	I0912 23:22:59.049546 1810706 host.go:66] Checking if "no-preload-693555" exists ...
	I0912 23:22:59.049943 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:59.051651 1810706 out.go:177] * Verifying Kubernetes components...
	I0912 23:22:59.064899 1810706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 23:22:59.096432 1810706 addons.go:234] Setting addon default-storageclass=true in "no-preload-693555"
	W0912 23:22:59.096455 1810706 addons.go:243] addon default-storageclass should already be in state true
	I0912 23:22:59.096481 1810706 host.go:66] Checking if "no-preload-693555" exists ...
	I0912 23:22:59.096892 1810706 cli_runner.go:164] Run: docker container inspect no-preload-693555 --format={{.State.Status}}
	I0912 23:22:59.124148 1810706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 23:22:59.127377 1810706 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0912 23:22:59.127515 1810706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:22:59.127535 1810706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 23:22:59.127612 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:59.131059 1810706 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0912 23:22:59.133340 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0912 23:22:59.133362 1810706 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0912 23:22:59.133437 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:59.138774 1810706 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0912 23:22:59.140436 1810706 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 23:22:59.140458 1810706 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 23:22:59.140521 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:59.172016 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:59.189044 1810706 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 23:22:59.189066 1810706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 23:22:59.189131 1810706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-693555
	I0912 23:22:59.218563 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:59.218563 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:59.232303 1810706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34939 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/no-preload-693555/id_rsa Username:docker}
	I0912 23:22:59.289187 1810706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 23:22:59.362366 1810706 node_ready.go:35] waiting up to 6m0s for node "no-preload-693555" to be "Ready" ...
	I0912 23:22:59.448900 1810706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:22:59.515863 1810706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 23:22:59.545756 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0912 23:22:59.545819 1810706 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0912 23:22:59.566590 1810706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 23:22:59.566653 1810706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0912 23:22:59.661905 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0912 23:22:59.661972 1810706 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0912 23:22:59.713327 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0912 23:22:59.713392 1810706 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0912 23:22:59.719510 1810706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 23:22:59.719586 1810706 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	W0912 23:22:59.816412 1810706 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0912 23:22:59.816509 1810706 retry.go:31] will retry after 192.006509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0912 23:22:59.865116 1810706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:22:59.865190 1810706 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 23:22:59.877483 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0912 23:22:59.877546 1810706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0912 23:22:59.988504 1810706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 23:23:00.009000 1810706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 23:23:00.181415 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0912 23:23:00.181526 1810706 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0912 23:23:00.392965 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0912 23:23:00.393059 1810706 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0912 23:23:00.549069 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0912 23:23:00.549142 1810706 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0912 23:23:00.620149 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0912 23:23:00.620215 1810706 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0912 23:23:00.651416 1810706 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0912 23:23:00.651492 1810706 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0912 23:23:00.672406 1810706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0912 23:22:59.091309 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:01.584645 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:03.584803 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:05.042480 1810706 node_ready.go:49] node "no-preload-693555" has status "Ready":"True"
	I0912 23:23:05.042507 1810706 node_ready.go:38] duration metric: took 5.680102929s for node "no-preload-693555" to be "Ready" ...
	I0912 23:23:05.042518 1810706 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:23:05.129210 1810706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zhsgq" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.149598 1810706 pod_ready.go:93] pod "coredns-7c65d6cfc9-zhsgq" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.149680 1810706 pod_ready.go:82] duration metric: took 20.371481ms for pod "coredns-7c65d6cfc9-zhsgq" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.149713 1810706 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.236142 1810706 pod_ready.go:93] pod "etcd-no-preload-693555" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.236216 1810706 pod_ready.go:82] duration metric: took 86.464149ms for pod "etcd-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.236246 1810706 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.254516 1810706 pod_ready.go:93] pod "kube-apiserver-no-preload-693555" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.254580 1810706 pod_ready.go:82] duration metric: took 18.312812ms for pod "kube-apiserver-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.254607 1810706 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.267222 1810706 pod_ready.go:93] pod "kube-controller-manager-no-preload-693555" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.267295 1810706 pod_ready.go:82] duration metric: took 12.664871ms for pod "kube-controller-manager-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.267322 1810706 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h54sf" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.285301 1810706 pod_ready.go:93] pod "kube-proxy-h54sf" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.285369 1810706 pod_ready.go:82] duration metric: took 18.024666ms for pod "kube-proxy-h54sf" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.285395 1810706 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.355270 1810706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.8393172s)
	I0912 23:23:05.673652 1810706 pod_ready.go:93] pod "kube-scheduler-no-preload-693555" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:05.673725 1810706 pod_ready.go:82] duration metric: took 388.309548ms for pod "kube-scheduler-no-preload-693555" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:05.673753 1810706 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:07.680189 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:08.473015 1810706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.4639097s)
	I0912 23:23:08.473486 1810706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.484891977s)
	I0912 23:23:08.473512 1810706 addons.go:475] Verifying addon metrics-server=true in "no-preload-693555"
	I0912 23:23:08.700405 1810706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.027881949s)
	I0912 23:23:08.702476 1810706 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-693555 addons enable metrics-server
	
	I0912 23:23:08.704720 1810706 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0912 23:23:05.586373 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:07.703232 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:08.706616 1810706 addons.go:510] duration metric: took 9.659243972s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0912 23:23:10.181098 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:10.092109 1805825 pod_ready.go:103] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:12.586070 1805825 pod_ready.go:93] pod "etcd-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:12.586091 1805825 pod_ready.go:82] duration metric: took 20.007061143s for pod "etcd-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.586105 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.592877 1805825 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:12.592899 1805825 pod_ready.go:82] duration metric: took 6.785661ms for pod "kube-apiserver-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.592910 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:12.181154 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:14.680773 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:14.599464 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:16.600010 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:18.600119 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:16.682604 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:19.182046 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:21.099631 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:23.598727 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:21.679775 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:24.180626 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:25.599494 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:28.099402 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:26.680250 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:28.680704 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:31.179841 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:30.108642 1805825 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:31.599271 1805825 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.599300 1805825 pod_ready.go:82] duration metric: took 19.006382161s for pod "kube-controller-manager-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.599314 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cd4m4" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.604530 1805825 pod_ready.go:93] pod "kube-proxy-cd4m4" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.604558 1805825 pod_ready.go:82] duration metric: took 5.236086ms for pod "kube-proxy-cd4m4" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.604570 1805825 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.609948 1805825 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace has status "Ready":"True"
	I0912 23:23:31.609978 1805825 pod_ready.go:82] duration metric: took 5.399884ms for pod "kube-scheduler-old-k8s-version-011723" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:31.609991 1805825 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace to be "Ready" ...
	I0912 23:23:33.616901 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:33.179905 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:35.680343 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:36.117406 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:38.616767 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:37.680376 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:39.680631 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:41.116618 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:43.117109 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:42.190548 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:44.680359 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:45.119017 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:47.615873 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:46.680774 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:49.180421 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:49.616264 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:51.616446 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:51.680668 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:54.179787 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:54.117042 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:56.617982 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:58.618639 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:56.680208 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:23:59.179818 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:01.180358 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:01.116490 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:03.615956 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:03.180509 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:05.680239 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:05.616291 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:07.616383 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:08.181629 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:10.679963 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:10.118971 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:12.615907 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:13.179823 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:15.180839 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:14.616327 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:16.616660 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:18.618315 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:17.680094 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:20.180382 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:21.116721 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:23.120405 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:22.180445 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:24.679939 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:25.615987 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:27.616285 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:26.682501 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:29.180183 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:31.180914 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:29.616736 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:32.116368 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:33.681606 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:36.179960 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:34.615597 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:36.615875 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:38.682405 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:41.180329 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:39.116734 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:41.616349 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:43.616829 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:43.680256 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:46.179519 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:46.116401 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:48.116475 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:48.180762 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:50.682900 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:50.116632 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:52.616460 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:53.179834 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:55.180251 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:55.116563 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:57.116939 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:57.181131 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:59.680834 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:24:59.615938 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:01.616962 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:01.680955 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:04.179816 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:06.180030 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:04.117033 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:06.615106 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:08.697132 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:08.682424 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:11.180620 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:11.117036 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:13.615834 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:13.680229 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:16.180488 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:15.616644 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:18.118163 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:18.680183 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:20.680361 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:20.616069 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:23.116443 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:22.680550 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:24.680781 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:25.118089 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:27.618412 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:27.179666 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:29.179790 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:31.179979 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:30.118003 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:32.615831 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:33.180600 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:35.679984 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:34.616187 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:36.616671 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:38.179228 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:40.180625 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:39.116087 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:41.615833 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:43.615886 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:42.186160 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:44.679386 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:45.616059 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:48.116800 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:46.680020 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:48.681153 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:51.179809 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:50.616480 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:53.115634 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:53.679299 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:55.685772 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:55.117262 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:57.615811 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:58.180525 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:00.197848 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:25:59.617089 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:01.617125 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:02.679455 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:04.679633 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:04.116539 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:06.615639 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:08.616084 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:06.680215 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:09.179840 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:11.180247 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:10.616278 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:13.116566 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:13.180819 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:15.181478 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:15.118162 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:17.616104 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:17.184108 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:19.679471 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:19.616529 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:21.617541 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:21.681674 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:24.180390 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:24.116954 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:26.618594 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:26.680710 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:28.681929 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:31.184219 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:29.135672 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:31.616184 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:33.680097 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:36.180273 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:34.117097 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:36.616730 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:38.680425 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:41.179918 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:39.115929 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:41.116884 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:43.116963 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:43.681011 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:46.179639 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:45.118983 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:47.617481 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:48.179957 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:50.181948 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:49.623855 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:52.116732 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:52.680196 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:54.680946 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:54.125647 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:56.615873 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:58.625134 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:57.179843 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:26:59.183687 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:01.117368 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:03.615687 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:01.680977 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:03.682557 1810706 pod_ready.go:103] pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:05.680117 1810706 pod_ready.go:82] duration metric: took 4m0.006337168s for pod "metrics-server-6867b74b74-x8xjw" in "kube-system" namespace to be "Ready" ...
	E0912 23:27:05.680146 1810706 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:27:05.680156 1810706 pod_ready.go:39] duration metric: took 4m0.637627117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:27:05.680172 1810706 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:27:05.680202 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:05.680269 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:05.726303 1810706 cri.go:89] found id: "477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:05.726326 1810706 cri.go:89] found id: "ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:05.726331 1810706 cri.go:89] found id: ""
	I0912 23:27:05.726339 1810706 logs.go:276] 2 containers: [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54]
	I0912 23:27:05.726401 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.730131 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.733979 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:05.734054 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:05.775503 1810706 cri.go:89] found id: "b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:05.775532 1810706 cri.go:89] found id: "f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:05.775538 1810706 cri.go:89] found id: ""
	I0912 23:27:05.775546 1810706 logs.go:276] 2 containers: [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a]
	I0912 23:27:05.775605 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.779430 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.782776 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:05.782853 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:05.840609 1810706 cri.go:89] found id: "48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:05.840632 1810706 cri.go:89] found id: "3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:05.840637 1810706 cri.go:89] found id: ""
	I0912 23:27:05.840644 1810706 logs.go:276] 2 containers: [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570]
	I0912 23:27:05.840715 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.844577 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.848252 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:05.848324 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:05.887710 1810706 cri.go:89] found id: "071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:05.887731 1810706 cri.go:89] found id: "9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:05.887738 1810706 cri.go:89] found id: ""
	I0912 23:27:05.887745 1810706 logs.go:276] 2 containers: [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee]
	I0912 23:27:05.887803 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.891539 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.895373 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:05.895450 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:05.932511 1810706 cri.go:89] found id: "5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:05.932535 1810706 cri.go:89] found id: "28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:05.932540 1810706 cri.go:89] found id: ""
	I0912 23:27:05.932548 1810706 logs.go:276] 2 containers: [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40]
	I0912 23:27:05.932612 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.936823 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.940579 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:05.940663 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:05.981506 1810706 cri.go:89] found id: "a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:05.981539 1810706 cri.go:89] found id: "05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:05.981545 1810706 cri.go:89] found id: ""
	I0912 23:27:05.981571 1810706 logs.go:276] 2 containers: [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205]
	I0912 23:27:05.981649 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.985360 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:05.988907 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:05.988986 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:06.032771 1810706 cri.go:89] found id: "ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:06.032841 1810706 cri.go:89] found id: "fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:06.032853 1810706 cri.go:89] found id: ""
	I0912 23:27:06.032861 1810706 logs.go:276] 2 containers: [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00]
	I0912 23:27:06.032929 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:06.037328 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:06.041115 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:06.041198 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:06.086630 1810706 cri.go:89] found id: "78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:06.086652 1810706 cri.go:89] found id: ""
	I0912 23:27:06.086661 1810706 logs.go:276] 1 containers: [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91]
	I0912 23:27:06.086729 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:06.090826 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:06.090950 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:06.139503 1810706 cri.go:89] found id: "4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:06.139577 1810706 cri.go:89] found id: "3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:06.139596 1810706 cri.go:89] found id: ""
	I0912 23:27:06.139622 1810706 logs.go:276] 2 containers: [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a]
	I0912 23:27:06.139783 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:06.143781 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:06.147594 1810706 logs.go:123] Gathering logs for kube-apiserver [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9] ...
	I0912 23:27:06.147674 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:06.228468 1810706 logs.go:123] Gathering logs for kube-apiserver [ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54] ...
	I0912 23:27:06.228504 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:06.296104 1810706 logs.go:123] Gathering logs for coredns [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11] ...
	I0912 23:27:06.296138 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:06.349591 1810706 logs.go:123] Gathering logs for kube-scheduler [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc] ...
	I0912 23:27:06.349619 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:06.390184 1810706 logs.go:123] Gathering logs for kube-proxy [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c] ...
	I0912 23:27:06.390213 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:06.441081 1810706 logs.go:123] Gathering logs for kube-proxy [28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40] ...
	I0912 23:27:06.441148 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:05.616841 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:07.617150 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:06.478957 1810706 logs.go:123] Gathering logs for kindnet [fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00] ...
	I0912 23:27:06.478986 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:06.517107 1810706 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:06.517132 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:06.693138 1810706 logs.go:123] Gathering logs for storage-provisioner [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba] ...
	I0912 23:27:06.693172 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:06.734434 1810706 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:06.734463 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:06.754206 1810706 logs.go:123] Gathering logs for kube-controller-manager [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a] ...
	I0912 23:27:06.754238 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:06.826622 1810706 logs.go:123] Gathering logs for kindnet [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde] ...
	I0912 23:27:06.826654 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:06.873063 1810706 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:06.873097 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:06.936656 1810706 logs.go:123] Gathering logs for container status ...
	I0912 23:27:06.936694 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:06.983454 1810706 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:06.983483 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:07.041658 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:07.041945 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:07.073178 1810706 logs.go:123] Gathering logs for etcd [f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a] ...
	I0912 23:27:07.073218 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:07.147535 1810706 logs.go:123] Gathering logs for kubernetes-dashboard [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91] ...
	I0912 23:27:07.147567 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:07.193591 1810706 logs.go:123] Gathering logs for storage-provisioner [3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a] ...
	I0912 23:27:07.193617 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:07.237149 1810706 logs.go:123] Gathering logs for etcd [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904] ...
	I0912 23:27:07.237187 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:07.297207 1810706 logs.go:123] Gathering logs for kube-scheduler [9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee] ...
	I0912 23:27:07.297294 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:07.353274 1810706 logs.go:123] Gathering logs for kube-controller-manager [05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205] ...
	I0912 23:27:07.353307 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:07.416648 1810706 logs.go:123] Gathering logs for coredns [3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570] ...
	I0912 23:27:07.416742 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:07.481538 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:07.481565 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:07.481643 1810706 out.go:270] X Problems detected in kubelet:
	W0912 23:27:07.481656 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:07.481692 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:07.481706 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:07.481713 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:10.117503 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:12.617018 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:15.117143 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:17.120989 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:17.482665 1810706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:27:17.494575 1810706 api_server.go:72] duration metric: took 4m18.447514875s to wait for apiserver process to appear ...
	I0912 23:27:17.494599 1810706 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:27:17.494633 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:17.494689 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:17.534406 1810706 cri.go:89] found id: "477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:17.534426 1810706 cri.go:89] found id: "ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:17.534431 1810706 cri.go:89] found id: ""
	I0912 23:27:17.534438 1810706 logs.go:276] 2 containers: [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54]
	I0912 23:27:17.534496 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.538209 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.541568 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:17.541642 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:17.609834 1810706 cri.go:89] found id: "b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:17.609857 1810706 cri.go:89] found id: "f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:17.609863 1810706 cri.go:89] found id: ""
	I0912 23:27:17.609870 1810706 logs.go:276] 2 containers: [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a]
	I0912 23:27:17.609932 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.619360 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.623359 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:17.623474 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:17.668082 1810706 cri.go:89] found id: "48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:17.668105 1810706 cri.go:89] found id: "3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:17.668110 1810706 cri.go:89] found id: ""
	I0912 23:27:17.668126 1810706 logs.go:276] 2 containers: [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570]
	I0912 23:27:17.668215 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.671997 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.675487 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:17.675579 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:17.713506 1810706 cri.go:89] found id: "071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:17.713530 1810706 cri.go:89] found id: "9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:17.713535 1810706 cri.go:89] found id: ""
	I0912 23:27:17.713543 1810706 logs.go:276] 2 containers: [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee]
	I0912 23:27:17.713622 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.717566 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.721146 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:17.721222 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:17.761779 1810706 cri.go:89] found id: "5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:17.761845 1810706 cri.go:89] found id: "28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:17.761873 1810706 cri.go:89] found id: ""
	I0912 23:27:17.761885 1810706 logs.go:276] 2 containers: [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40]
	I0912 23:27:17.761955 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.765654 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.769275 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:17.769392 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:17.817079 1810706 cri.go:89] found id: "a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:17.817151 1810706 cri.go:89] found id: "05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:17.817170 1810706 cri.go:89] found id: ""
	I0912 23:27:17.817195 1810706 logs.go:276] 2 containers: [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205]
	I0912 23:27:17.817283 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.823255 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.826746 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:17.826819 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:17.868225 1810706 cri.go:89] found id: "ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:17.868248 1810706 cri.go:89] found id: "fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:17.868253 1810706 cri.go:89] found id: ""
	I0912 23:27:17.868259 1810706 logs.go:276] 2 containers: [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00]
	I0912 23:27:17.868347 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.872534 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.876080 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:17.876176 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:17.912671 1810706 cri.go:89] found id: "4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:17.912696 1810706 cri.go:89] found id: "3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:17.912701 1810706 cri.go:89] found id: ""
	I0912 23:27:17.912709 1810706 logs.go:276] 2 containers: [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a]
	I0912 23:27:17.912770 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.916682 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.920305 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:17.920380 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:17.959796 1810706 cri.go:89] found id: "78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:17.959819 1810706 cri.go:89] found id: ""
	I0912 23:27:17.959827 1810706 logs.go:276] 1 containers: [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91]
	I0912 23:27:17.959898 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:17.963592 1810706 logs.go:123] Gathering logs for etcd [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904] ...
	I0912 23:27:17.963617 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:18.008283 1810706 logs.go:123] Gathering logs for kube-controller-manager [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a] ...
	I0912 23:27:18.008381 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:18.111207 1810706 logs.go:123] Gathering logs for coredns [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11] ...
	I0912 23:27:18.111293 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:18.163000 1810706 logs.go:123] Gathering logs for kube-scheduler [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc] ...
	I0912 23:27:18.163027 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:18.205560 1810706 logs.go:123] Gathering logs for kube-proxy [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c] ...
	I0912 23:27:18.205590 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:18.244956 1810706 logs.go:123] Gathering logs for kube-controller-manager [05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205] ...
	I0912 23:27:18.245032 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:18.303877 1810706 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:18.303909 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:18.321794 1810706 logs.go:123] Gathering logs for kube-apiserver [ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54] ...
	I0912 23:27:18.321826 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:18.369856 1810706 logs.go:123] Gathering logs for kube-proxy [28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40] ...
	I0912 23:27:18.369890 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:18.441349 1810706 logs.go:123] Gathering logs for storage-provisioner [3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a] ...
	I0912 23:27:18.441399 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:18.485739 1810706 logs.go:123] Gathering logs for kube-apiserver [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9] ...
	I0912 23:27:18.485767 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:18.551483 1810706 logs.go:123] Gathering logs for kube-scheduler [9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee] ...
	I0912 23:27:18.551516 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:18.602046 1810706 logs.go:123] Gathering logs for etcd [f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a] ...
	I0912 23:27:18.602082 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:18.669708 1810706 logs.go:123] Gathering logs for coredns [3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570] ...
	I0912 23:27:18.669742 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:18.713002 1810706 logs.go:123] Gathering logs for kindnet [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde] ...
	I0912 23:27:18.713032 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:18.771074 1810706 logs.go:123] Gathering logs for kindnet [fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00] ...
	I0912 23:27:18.771103 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:18.814427 1810706 logs.go:123] Gathering logs for storage-provisioner [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba] ...
	I0912 23:27:18.814454 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:18.860354 1810706 logs.go:123] Gathering logs for kubernetes-dashboard [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91] ...
	I0912 23:27:18.860384 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:18.903812 1810706 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:18.903842 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:18.947070 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:18.947335 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:18.979330 1810706 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:18.979364 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:19.122952 1810706 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:19.122984 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:19.190451 1810706 logs.go:123] Gathering logs for container status ...
	I0912 23:27:19.190491 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:19.245314 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:19.245341 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:19.245427 1810706 out.go:270] X Problems detected in kubelet:
	W0912 23:27:19.245444 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:19.245563 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:19.245578 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:19.245585 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:19.616374 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:22.117190 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:24.615625 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:26.615774 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:28.619296 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:29.246993 1810706 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0912 23:27:29.255600 1810706 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0912 23:27:29.256623 1810706 api_server.go:141] control plane version: v1.31.1
	I0912 23:27:29.256661 1810706 api_server.go:131] duration metric: took 11.76205474s to wait for apiserver health ...
	I0912 23:27:29.256670 1810706 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 23:27:29.256695 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:29.256763 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:29.306724 1810706 cri.go:89] found id: "477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:29.306745 1810706 cri.go:89] found id: "ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:29.306751 1810706 cri.go:89] found id: ""
	I0912 23:27:29.306759 1810706 logs.go:276] 2 containers: [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54]
	I0912 23:27:29.306818 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.310799 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.314570 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:29.314642 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:29.352451 1810706 cri.go:89] found id: "b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:29.352478 1810706 cri.go:89] found id: "f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:29.352484 1810706 cri.go:89] found id: ""
	I0912 23:27:29.352498 1810706 logs.go:276] 2 containers: [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a]
	I0912 23:27:29.352568 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.356513 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.360125 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:29.360217 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:29.401465 1810706 cri.go:89] found id: "48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:29.401486 1810706 cri.go:89] found id: "3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:29.401492 1810706 cri.go:89] found id: ""
	I0912 23:27:29.401499 1810706 logs.go:276] 2 containers: [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570]
	I0912 23:27:29.401565 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.405380 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.408713 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:29.408781 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:29.456275 1810706 cri.go:89] found id: "071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:29.456296 1810706 cri.go:89] found id: "9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:29.456301 1810706 cri.go:89] found id: ""
	I0912 23:27:29.456309 1810706 logs.go:276] 2 containers: [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee]
	I0912 23:27:29.456371 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.460070 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.463656 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:29.463771 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:29.515367 1810706 cri.go:89] found id: "5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:29.515388 1810706 cri.go:89] found id: "28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:29.515393 1810706 cri.go:89] found id: ""
	I0912 23:27:29.515408 1810706 logs.go:276] 2 containers: [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40]
	I0912 23:27:29.515466 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.519312 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.525754 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:29.525827 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:29.578003 1810706 cri.go:89] found id: "a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:29.578028 1810706 cri.go:89] found id: "05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:29.578033 1810706 cri.go:89] found id: ""
	I0912 23:27:29.578040 1810706 logs.go:276] 2 containers: [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205]
	I0912 23:27:29.578104 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.582129 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.585660 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:29.585762 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:29.631134 1810706 cri.go:89] found id: "ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:29.631162 1810706 cri.go:89] found id: "fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:29.631168 1810706 cri.go:89] found id: ""
	I0912 23:27:29.631175 1810706 logs.go:276] 2 containers: [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00]
	I0912 23:27:29.631261 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.634759 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.637962 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:29.638029 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:29.681998 1810706 cri.go:89] found id: "78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:29.682021 1810706 cri.go:89] found id: ""
	I0912 23:27:29.682030 1810706 logs.go:276] 1 containers: [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91]
	I0912 23:27:29.682092 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.687430 1810706 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:29.687516 1810706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:29.733473 1810706 cri.go:89] found id: "4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:29.733506 1810706 cri.go:89] found id: "3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:29.733511 1810706 cri.go:89] found id: ""
	I0912 23:27:29.733518 1810706 logs.go:276] 2 containers: [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a]
	I0912 23:27:29.733595 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.737362 1810706 ssh_runner.go:195] Run: which crictl
	I0912 23:27:29.741236 1810706 logs.go:123] Gathering logs for kube-controller-manager [05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205] ...
	I0912 23:27:29.741261 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05d21ce7223f303ab629c852afd0c23d3bd83ddc7f5e525b19eafcd518d90205"
	I0912 23:27:29.804495 1810706 logs.go:123] Gathering logs for kindnet [ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde] ...
	I0912 23:27:29.804531 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad43bfb6f08066e2b2b459f86e327a1828a7098f4276f8a976c8c428da2e0cde"
	I0912 23:27:29.851503 1810706 logs.go:123] Gathering logs for storage-provisioner [4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba] ...
	I0912 23:27:29.851533 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be7edac801c048e4da6a8f0ebaa4d578467a01e9db5b68dd58cb1db560298ba"
	I0912 23:27:29.891526 1810706 logs.go:123] Gathering logs for coredns [3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570] ...
	I0912 23:27:29.891598 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3476d1c345d735c195eff6f0e379a19ce6de55dc9031b47cbb8c3846995b3570"
	I0912 23:27:29.938568 1810706 logs.go:123] Gathering logs for kube-scheduler [071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc] ...
	I0912 23:27:29.938646 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 071450c5d56030c924328604ca60161b7d7d98433dd7345d90f9572494b82dfc"
	I0912 23:27:29.991944 1810706 logs.go:123] Gathering logs for kube-apiserver [477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9] ...
	I0912 23:27:29.992025 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 477e1b27d6244de5ae6a64237e7e4cf2909e20d681e3a1995c8958dec44ac0d9"
	I0912 23:27:30.078375 1810706 logs.go:123] Gathering logs for coredns [48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11] ...
	I0912 23:27:30.078465 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48782a6da8f4b7c0cd71ed667e0936b18b356230e449a6a4fd8e70476aaafc11"
	I0912 23:27:30.149583 1810706 logs.go:123] Gathering logs for kube-proxy [28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40] ...
	I0912 23:27:30.149623 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28433c93e2874843ba88312299226db85cf2925d42c4198ced81f444cc7c5e40"
	I0912 23:27:30.212709 1810706 logs.go:123] Gathering logs for kindnet [fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00] ...
	I0912 23:27:30.212748 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdd0e8bfd032b19b7d03a25e135b60184cfb1ef8aef841ccf80addc3c98a4a00"
	I0912 23:27:30.255121 1810706 logs.go:123] Gathering logs for kubernetes-dashboard [78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91] ...
	I0912 23:27:30.255212 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78034ebb53280de477b2a646a33dfa53044f1449734af88bb23d4704b42a3b91"
	I0912 23:27:30.308015 1810706 logs.go:123] Gathering logs for storage-provisioner [3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a] ...
	I0912 23:27:30.308089 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ff147d7cfc903e15eb338718acc2f0b024bbfa87a8593495c3b7ea4c0fac87a"
	I0912 23:27:30.352283 1810706 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:30.352307 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:30.401232 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:30.401513 1810706 logs.go:138] Found kubelet problem: Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:30.434451 1810706 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:30.434488 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:30.566368 1810706 logs.go:123] Gathering logs for container status ...
	I0912 23:27:30.566398 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:30.623446 1810706 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:30.623476 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:30.691082 1810706 logs.go:123] Gathering logs for etcd [b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904] ...
	I0912 23:27:30.691115 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5c67eeae67f566a3e85ad69f208260f5d872989b10777b786b23b67f5d79904"
	I0912 23:27:30.739119 1810706 logs.go:123] Gathering logs for kube-scheduler [9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee] ...
	I0912 23:27:30.739148 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e667a77ee1ed22d157b6759c8ddbf053e25b962646044aafe484dfc1f60b8ee"
	I0912 23:27:30.794388 1810706 logs.go:123] Gathering logs for etcd [f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a] ...
	I0912 23:27:30.794428 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f8905636a6533439b8b3a5fa972466a779c617744979f0d9408405553d59ad5a"
	I0912 23:27:30.844807 1810706 logs.go:123] Gathering logs for kube-proxy [5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c] ...
	I0912 23:27:30.844838 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c4a6b9252768278b934e06df32e955004d314e150c2f5983264c8897b73c08c"
	I0912 23:27:30.885817 1810706 logs.go:123] Gathering logs for kube-controller-manager [a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a] ...
	I0912 23:27:30.885846 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8e3a456f0b4b10613af9107fb417b040b7b6a9f9dd664e224687dec8702388a"
	I0912 23:27:30.968069 1810706 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:30.968102 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:30.985663 1810706 logs.go:123] Gathering logs for kube-apiserver [ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54] ...
	I0912 23:27:30.985694 1810706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffd8e004c279749df62bdc8177862941341c4f58331b2a372e0f822b6ec26c54"
	I0912 23:27:31.051395 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:31.051429 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:31.051495 1810706 out.go:270] X Problems detected in kubelet:
	W0912 23:27:31.051512 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: W0912 23:23:08.516515     660 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-693555" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-693555' and this object
	W0912 23:27:31.051520 1810706 out.go:270]   Sep 12 23:23:08 no-preload-693555 kubelet[660]: E0912 23:23:08.516568     660 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-693555\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-693555' and this object" logger="UnhandledError"
	I0912 23:27:31.051532 1810706 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:31.051538 1810706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:30.620004 1805825 pod_ready.go:103] pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace has status "Ready":"False"
	I0912 23:27:31.616410 1805825 pod_ready.go:82] duration metric: took 4m0.006405602s for pod "metrics-server-9975d5f86-gklxg" in "kube-system" namespace to be "Ready" ...
	E0912 23:27:31.616439 1805825 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0912 23:27:31.616449 1805825 pod_ready.go:39] duration metric: took 5m24.135588484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 23:27:31.616463 1805825 api_server.go:52] waiting for apiserver process to appear ...
	I0912 23:27:31.616491 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:31.616558 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:31.656235 1805825 cri.go:89] found id: "e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:31.656258 1805825 cri.go:89] found id: "5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:31.656263 1805825 cri.go:89] found id: ""
	I0912 23:27:31.656270 1805825 logs.go:276] 2 containers: [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a]
	I0912 23:27:31.656357 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.659874 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.663300 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:31.663382 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:31.701173 1805825 cri.go:89] found id: "acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:31.701208 1805825 cri.go:89] found id: "f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:31.701213 1805825 cri.go:89] found id: ""
	I0912 23:27:31.701224 1805825 logs.go:276] 2 containers: [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab]
	I0912 23:27:31.701387 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.704920 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.712336 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:31.712435 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:31.749195 1805825 cri.go:89] found id: "761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:31.749217 1805825 cri.go:89] found id: "c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:31.749222 1805825 cri.go:89] found id: ""
	I0912 23:27:31.749229 1805825 logs.go:276] 2 containers: [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28]
	I0912 23:27:31.749310 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.753118 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.756544 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:31.756618 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:31.795469 1805825 cri.go:89] found id: "ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:31.795545 1805825 cri.go:89] found id: "e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:31.795564 1805825 cri.go:89] found id: ""
	I0912 23:27:31.795588 1805825 logs.go:276] 2 containers: [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9]
	I0912 23:27:31.795669 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.799094 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.802491 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:31.802601 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:31.839752 1805825 cri.go:89] found id: "94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:31.839776 1805825 cri.go:89] found id: "f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:31.839781 1805825 cri.go:89] found id: ""
	I0912 23:27:31.839789 1805825 logs.go:276] 2 containers: [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71]
	I0912 23:27:31.839873 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.843319 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.846600 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:31.846683 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:31.887977 1805825 cri.go:89] found id: "e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:31.888039 1805825 cri.go:89] found id: "ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:31.888060 1805825 cri.go:89] found id: ""
	I0912 23:27:31.888089 1805825 logs.go:276] 2 containers: [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99]
	I0912 23:27:31.888195 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.891769 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.895304 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:31.895389 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:31.933166 1805825 cri.go:89] found id: "f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:31.933244 1805825 cri.go:89] found id: "2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:31.933257 1805825 cri.go:89] found id: ""
	I0912 23:27:31.933266 1805825 logs.go:276] 2 containers: [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade]
	I0912 23:27:31.933335 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.936945 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.940365 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:31.940485 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:31.981027 1805825 cri.go:89] found id: "3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:31.981107 1805825 cri.go:89] found id: ""
	I0912 23:27:31.981131 1805825 logs.go:276] 1 containers: [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8]
	I0912 23:27:31.981229 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:31.985423 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:31.985539 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:32.033051 1805825 cri.go:89] found id: "1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:32.033072 1805825 cri.go:89] found id: "91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:32.033077 1805825 cri.go:89] found id: ""
	I0912 23:27:32.033084 1805825 logs.go:276] 2 containers: [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299]
	I0912 23:27:32.033166 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:32.036867 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:32.040391 1805825 logs.go:123] Gathering logs for kube-proxy [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168] ...
	I0912 23:27:32.040424 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:32.086447 1805825 logs.go:123] Gathering logs for container status ...
	I0912 23:27:32.086476 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:32.136905 1805825 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:32.137006 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:32.190946 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411026     664 reflector.go:138] object-"kube-system"/"coredns-token-m2js8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2js8" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191190 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411278     664 reflector.go:138] object-"kube-system"/"kindnet-token-xzbvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xzbvw" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191408 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411446     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411664     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-k7dwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-k7dwz" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.191851 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411960     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192082 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414071     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7cln": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7cln" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192298 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414335     664 reflector.go:138] object-"default"/"default-token-bxtgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bxtgn" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.192523 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414403     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fdc76": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fdc76" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:32.199031 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.557235     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.200921 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.840628     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.205210 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.466661     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.206993 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.472436     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.208452 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.366884     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.210201 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.463324     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.212088 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.478847     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.213664 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.482852     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.214501 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.566317     664 pod_workers.go:191] Error syncing pod 7a260c8b-3e99-476d-bb2a-f42a54017c50 ("busybox_default(7a260c8b-3e99-476d-bb2a-f42a54017c50)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:32.217039 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.591024     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.217368 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:12 old-k8s-version-011723 kubelet[664]: E0912 23:22:12.488949     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.222297 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:34 old-k8s-version-011723 kubelet[664]: E0912 23:22:34.530645     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.223276 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:36 old-k8s-version-011723 kubelet[664]: E0912 23:22:36.632305     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.223614 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:37 old-k8s-version-011723 kubelet[664]: E0912 23:22:37.632313     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.223957 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:44 old-k8s-version-011723 kubelet[664]: E0912 23:22:44.683335     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.224486 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:47 old-k8s-version-011723 kubelet[664]: E0912 23:22:47.274232     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.225089 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:58 old-k8s-version-011723 kubelet[664]: E0912 23:22:58.684667     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.227589 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:00 old-k8s-version-011723 kubelet[664]: E0912 23:23:00.296836     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.227925 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:04 old-k8s-version-011723 kubelet[664]: E0912 23:23:04.683877     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.228115 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:12 old-k8s-version-011723 kubelet[664]: E0912 23:23:12.274275     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.228446 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:16 old-k8s-version-011723 kubelet[664]: E0912 23:23:16.273507     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.228639 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:23 old-k8s-version-011723 kubelet[664]: E0912 23:23:23.274327     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.229234 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:29 old-k8s-version-011723 kubelet[664]: E0912 23:23:29.784506     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.229569 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:34 old-k8s-version-011723 kubelet[664]: E0912 23:23:34.683510     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.229761 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:38 old-k8s-version-011723 kubelet[664]: E0912 23:23:38.273957     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.230108 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:50 old-k8s-version-011723 kubelet[664]: E0912 23:23:50.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.232628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:52 old-k8s-version-011723 kubelet[664]: E0912 23:23:52.283800     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.232968 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:01 old-k8s-version-011723 kubelet[664]: E0912 23:24:01.274256     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.233157 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:04 old-k8s-version-011723 kubelet[664]: E0912 23:24:04.274131     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.233766 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:14 old-k8s-version-011723 kubelet[664]: E0912 23:24:14.908668     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.233955 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:15 old-k8s-version-011723 kubelet[664]: E0912 23:24:15.282101     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.234302 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:24 old-k8s-version-011723 kubelet[664]: E0912 23:24:24.683372     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.234497 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:27 old-k8s-version-011723 kubelet[664]: E0912 23:24:27.273932     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.234835 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:40 old-k8s-version-011723 kubelet[664]: E0912 23:24:40.274451     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.235026 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:42 old-k8s-version-011723 kubelet[664]: E0912 23:24:42.274046     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.235222 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:53 old-k8s-version-011723 kubelet[664]: E0912 23:24:53.273989     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.235556 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:54 old-k8s-version-011723 kubelet[664]: E0912 23:24:54.273546     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.235753 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:04 old-k8s-version-011723 kubelet[664]: E0912 23:25:04.273901     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.236095 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:05 old-k8s-version-011723 kubelet[664]: E0912 23:25:05.274360     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.238700 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:15 old-k8s-version-011723 kubelet[664]: E0912 23:25:15.305120     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:32.239058 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:19 old-k8s-version-011723 kubelet[664]: E0912 23:25:19.274155     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.239305 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:28 old-k8s-version-011723 kubelet[664]: E0912 23:25:28.273983     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.239641 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:30 old-k8s-version-011723 kubelet[664]: E0912 23:25:30.273912     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.239839 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:39 old-k8s-version-011723 kubelet[664]: E0912 23:25:39.277598     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.240445 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:43 old-k8s-version-011723 kubelet[664]: E0912 23:25:43.182848     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.240777 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:44 old-k8s-version-011723 kubelet[664]: E0912 23:25:44.683303     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.240968 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:51 old-k8s-version-011723 kubelet[664]: E0912 23:25:51.278776     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.241306 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:58 old-k8s-version-011723 kubelet[664]: E0912 23:25:58.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.241496 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:03 old-k8s-version-011723 kubelet[664]: E0912 23:26:03.278105     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.241833 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:10 old-k8s-version-011723 kubelet[664]: E0912 23:26:10.273581     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.242022 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:14 old-k8s-version-011723 kubelet[664]: E0912 23:26:14.273888     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.242356 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: E0912 23:26:22.273505     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.242547 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:27 old-k8s-version-011723 kubelet[664]: E0912 23:26:27.278136     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.242883 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: E0912 23:26:36.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.243073 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:38 old-k8s-version-011723 kubelet[664]: E0912 23:26:38.273832     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.243267 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:49 old-k8s-version-011723 kubelet[664]: E0912 23:26:49.273962     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.243600 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: E0912 23:26:50.273517     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.243797 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.244128 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.244318 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:32.244672 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:32.244861 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:32.244873 1805825 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:32.244888 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:32.263536 1805825 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:32.263566 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:32.418067 1805825 logs.go:123] Gathering logs for kube-apiserver [5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a] ...
	I0912 23:27:32.418100 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:32.472888 1805825 logs.go:123] Gathering logs for etcd [f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab] ...
	I0912 23:27:32.472923 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:32.521854 1805825 logs.go:123] Gathering logs for kube-scheduler [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb] ...
	I0912 23:27:32.521887 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:32.574517 1805825 logs.go:123] Gathering logs for kube-apiserver [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b] ...
	I0912 23:27:32.574547 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:32.634129 1805825 logs.go:123] Gathering logs for coredns [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623] ...
	I0912 23:27:32.634165 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:32.680668 1805825 logs.go:123] Gathering logs for kindnet [2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade] ...
	I0912 23:27:32.680696 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:32.718806 1805825 logs.go:123] Gathering logs for storage-provisioner [91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299] ...
	I0912 23:27:32.718833 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:32.764018 1805825 logs.go:123] Gathering logs for etcd [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa] ...
	I0912 23:27:32.764051 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:32.815016 1805825 logs.go:123] Gathering logs for coredns [c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28] ...
	I0912 23:27:32.815050 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:32.860482 1805825 logs.go:123] Gathering logs for kube-scheduler [e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9] ...
	I0912 23:27:32.860511 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:32.903923 1805825 logs.go:123] Gathering logs for kube-proxy [f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71] ...
	I0912 23:27:32.903952 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:32.947569 1805825 logs.go:123] Gathering logs for kube-controller-manager [ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99] ...
	I0912 23:27:32.947601 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:33.028599 1805825 logs.go:123] Gathering logs for kindnet [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64] ...
	I0912 23:27:33.028643 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:33.070980 1805825 logs.go:123] Gathering logs for kube-controller-manager [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340] ...
	I0912 23:27:33.071011 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:33.133140 1805825 logs.go:123] Gathering logs for kubernetes-dashboard [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8] ...
	I0912 23:27:33.133176 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:33.185246 1805825 logs.go:123] Gathering logs for storage-provisioner [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9] ...
	I0912 23:27:33.185279 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:33.225304 1805825 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:33.225338 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:33.298930 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:33.298966 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:33.299025 1805825 out.go:270] X Problems detected in kubelet:
	W0912 23:27:33.299037 1805825 out.go:270]   Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:33.299044 1805825 out.go:270]   Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:33.299070 1805825 out.go:270]   Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:33.299077 1805825 out.go:270]   Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:33.299086 1805825 out.go:270]   Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:33.299091 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:33.299097 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:41.059097 1810706 system_pods.go:59] 9 kube-system pods found
	I0912 23:27:41.059200 1810706 system_pods.go:61] "coredns-7c65d6cfc9-zhsgq" [16bb9cf4-891d-43c6-b489-4ad94da02766] Running
	I0912 23:27:41.059234 1810706 system_pods.go:61] "etcd-no-preload-693555" [20ab9e86-a8a5-4c71-8aec-f7df68bad75c] Running
	I0912 23:27:41.059267 1810706 system_pods.go:61] "kindnet-nlh64" [9a37b725-19a3-4280-a4cc-ec8d214d4d0d] Running
	I0912 23:27:41.059293 1810706 system_pods.go:61] "kube-apiserver-no-preload-693555" [c656a837-6f11-4d71-82aa-5229b39df334] Running
	I0912 23:27:41.059324 1810706 system_pods.go:61] "kube-controller-manager-no-preload-693555" [db016c36-dc2e-4876-975d-8f69679984a4] Running
	I0912 23:27:41.059345 1810706 system_pods.go:61] "kube-proxy-h54sf" [9f64daeb-5f5d-4af9-9e19-3dcf5c01eeee] Running
	I0912 23:27:41.059377 1810706 system_pods.go:61] "kube-scheduler-no-preload-693555" [6c0e71df-95f0-40a0-b482-5ade3bfec475] Running
	I0912 23:27:41.059409 1810706 system_pods.go:61] "metrics-server-6867b74b74-x8xjw" [5a501205-94ed-41dd-bb33-337bf9ecb022] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:27:41.059436 1810706 system_pods.go:61] "storage-provisioner" [39689ad0-9762-4536-b68d-03470dc67833] Running
	I0912 23:27:41.059457 1810706 system_pods.go:74] duration metric: took 11.802780963s to wait for pod list to return data ...
	I0912 23:27:41.059499 1810706 default_sa.go:34] waiting for default service account to be created ...
	I0912 23:27:41.062561 1810706 default_sa.go:45] found service account: "default"
	I0912 23:27:41.062587 1810706 default_sa.go:55] duration metric: took 3.058705ms for default service account to be created ...
	I0912 23:27:41.062599 1810706 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 23:27:41.068921 1810706 system_pods.go:86] 9 kube-system pods found
	I0912 23:27:41.069003 1810706 system_pods.go:89] "coredns-7c65d6cfc9-zhsgq" [16bb9cf4-891d-43c6-b489-4ad94da02766] Running
	I0912 23:27:41.069027 1810706 system_pods.go:89] "etcd-no-preload-693555" [20ab9e86-a8a5-4c71-8aec-f7df68bad75c] Running
	I0912 23:27:41.069050 1810706 system_pods.go:89] "kindnet-nlh64" [9a37b725-19a3-4280-a4cc-ec8d214d4d0d] Running
	I0912 23:27:41.069085 1810706 system_pods.go:89] "kube-apiserver-no-preload-693555" [c656a837-6f11-4d71-82aa-5229b39df334] Running
	I0912 23:27:41.069113 1810706 system_pods.go:89] "kube-controller-manager-no-preload-693555" [db016c36-dc2e-4876-975d-8f69679984a4] Running
	I0912 23:27:41.069136 1810706 system_pods.go:89] "kube-proxy-h54sf" [9f64daeb-5f5d-4af9-9e19-3dcf5c01eeee] Running
	I0912 23:27:41.069159 1810706 system_pods.go:89] "kube-scheduler-no-preload-693555" [6c0e71df-95f0-40a0-b482-5ade3bfec475] Running
	I0912 23:27:41.069197 1810706 system_pods.go:89] "metrics-server-6867b74b74-x8xjw" [5a501205-94ed-41dd-bb33-337bf9ecb022] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 23:27:41.069225 1810706 system_pods.go:89] "storage-provisioner" [39689ad0-9762-4536-b68d-03470dc67833] Running
	I0912 23:27:41.069252 1810706 system_pods.go:126] duration metric: took 6.646378ms to wait for k8s-apps to be running ...
	I0912 23:27:41.069273 1810706 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 23:27:41.069361 1810706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:27:41.085075 1810706 system_svc.go:56] duration metric: took 15.792129ms WaitForService to wait for kubelet
	I0912 23:27:41.085101 1810706 kubeadm.go:582] duration metric: took 4m42.038044796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 23:27:41.085123 1810706 node_conditions.go:102] verifying NodePressure condition ...
	I0912 23:27:41.088919 1810706 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0912 23:27:41.088954 1810706 node_conditions.go:123] node cpu capacity is 2
	I0912 23:27:41.088967 1810706 node_conditions.go:105] duration metric: took 3.838018ms to run NodePressure ...
	I0912 23:27:41.088980 1810706 start.go:241] waiting for startup goroutines ...
	I0912 23:27:41.088988 1810706 start.go:246] waiting for cluster config update ...
	I0912 23:27:41.088999 1810706 start.go:255] writing updated cluster config ...
	I0912 23:27:41.089318 1810706 ssh_runner.go:195] Run: rm -f paused
	I0912 23:27:41.153623 1810706 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 23:27:41.156052 1810706 out.go:177] * Done! kubectl is now configured to use "no-preload-693555" cluster and "default" namespace by default
	I0912 23:27:43.301335 1805825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:27:43.313968 1805825 api_server.go:72] duration metric: took 5m55.797654969s to wait for apiserver process to appear ...
	I0912 23:27:43.313998 1805825 api_server.go:88] waiting for apiserver healthz status ...
	I0912 23:27:43.314038 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0912 23:27:43.314095 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 23:27:43.353269 1805825 cri.go:89] found id: "e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:43.353292 1805825 cri.go:89] found id: "5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:43.353297 1805825 cri.go:89] found id: ""
	I0912 23:27:43.353305 1805825 logs.go:276] 2 containers: [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a]
	I0912 23:27:43.353363 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.357194 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.360691 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0912 23:27:43.360764 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 23:27:43.409115 1805825 cri.go:89] found id: "acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:43.409135 1805825 cri.go:89] found id: "f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:43.409140 1805825 cri.go:89] found id: ""
	I0912 23:27:43.409148 1805825 logs.go:276] 2 containers: [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab]
	I0912 23:27:43.409205 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.414385 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.418038 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0912 23:27:43.418103 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 23:27:43.470267 1805825 cri.go:89] found id: "761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:43.470287 1805825 cri.go:89] found id: "c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:43.470292 1805825 cri.go:89] found id: ""
	I0912 23:27:43.470299 1805825 logs.go:276] 2 containers: [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28]
	I0912 23:27:43.470361 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.474528 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.478036 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0912 23:27:43.478114 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 23:27:43.520737 1805825 cri.go:89] found id: "ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:43.520766 1805825 cri.go:89] found id: "e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:43.520773 1805825 cri.go:89] found id: ""
	I0912 23:27:43.520780 1805825 logs.go:276] 2 containers: [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9]
	I0912 23:27:43.520840 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.524564 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.528472 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0912 23:27:43.528544 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 23:27:43.575684 1805825 cri.go:89] found id: "94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:43.575760 1805825 cri.go:89] found id: "f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:43.575767 1805825 cri.go:89] found id: ""
	I0912 23:27:43.575775 1805825 logs.go:276] 2 containers: [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71]
	I0912 23:27:43.575853 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.579573 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.582968 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 23:27:43.583040 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 23:27:43.622197 1805825 cri.go:89] found id: "e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:43.622232 1805825 cri.go:89] found id: "ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:43.622242 1805825 cri.go:89] found id: ""
	I0912 23:27:43.622267 1805825 logs.go:276] 2 containers: [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99]
	I0912 23:27:43.622346 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.625917 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.629579 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0912 23:27:43.629657 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 23:27:43.672137 1805825 cri.go:89] found id: "f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:43.672160 1805825 cri.go:89] found id: "2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:43.672166 1805825 cri.go:89] found id: ""
	I0912 23:27:43.672174 1805825 logs.go:276] 2 containers: [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade]
	I0912 23:27:43.672232 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.675927 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.680672 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0912 23:27:43.680740 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0912 23:27:43.718009 1805825 cri.go:89] found id: "3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:43.718034 1805825 cri.go:89] found id: ""
	I0912 23:27:43.718042 1805825 logs.go:276] 1 containers: [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8]
	I0912 23:27:43.718099 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.721672 1805825 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0912 23:27:43.721750 1805825 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 23:27:43.759833 1805825 cri.go:89] found id: "1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:43.759855 1805825 cri.go:89] found id: "91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:43.759860 1805825 cri.go:89] found id: ""
	I0912 23:27:43.759867 1805825 logs.go:276] 2 containers: [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299]
	I0912 23:27:43.759939 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.763575 1805825 ssh_runner.go:195] Run: which crictl
	I0912 23:27:43.767207 1805825 logs.go:123] Gathering logs for kubernetes-dashboard [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8] ...
	I0912 23:27:43.767286 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8"
	I0912 23:27:43.812189 1805825 logs.go:123] Gathering logs for etcd [f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab] ...
	I0912 23:27:43.812262 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab"
	I0912 23:27:43.860473 1805825 logs.go:123] Gathering logs for coredns [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623] ...
	I0912 23:27:43.860501 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623"
	I0912 23:27:43.901572 1805825 logs.go:123] Gathering logs for kube-proxy [f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71] ...
	I0912 23:27:43.901599 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71"
	I0912 23:27:43.941592 1805825 logs.go:123] Gathering logs for kube-controller-manager [ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99] ...
	I0912 23:27:43.941623 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99"
	I0912 23:27:44.000715 1805825 logs.go:123] Gathering logs for kindnet [2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade] ...
	I0912 23:27:44.000754 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade"
	I0912 23:27:44.049375 1805825 logs.go:123] Gathering logs for storage-provisioner [91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299] ...
	I0912 23:27:44.049402 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299"
	I0912 23:27:44.095108 1805825 logs.go:123] Gathering logs for containerd ...
	I0912 23:27:44.095139 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0912 23:27:44.155926 1805825 logs.go:123] Gathering logs for kubelet ...
	I0912 23:27:44.155961 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0912 23:27:44.207364 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411026     664 reflector.go:138] object-"kube-system"/"coredns-token-m2js8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-m2js8" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.207620 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411278     664 reflector.go:138] object-"kube-system"/"kindnet-token-xzbvw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xzbvw" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.207833 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411446     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208051 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411664     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-k7dwz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-k7dwz" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208258 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.411960     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208485 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414071     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7cln": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7cln" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208697 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414335     664 reflector.go:138] object-"default"/"default-token-bxtgn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bxtgn" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.208921 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:07 old-k8s-version-011723 kubelet[664]: E0912 23:22:07.414403     664 reflector.go:138] object-"kube-system"/"metrics-server-token-fdc76": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fdc76" is forbidden: User "system:node:old-k8s-version-011723" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-011723' and this object
	W0912 23:27:44.215368 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.557235     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.217198 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:09 old-k8s-version-011723 kubelet[664]: E0912 23:22:09.840628     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.221475 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.466661     664 pod_workers.go:191] Error syncing pod e7e03576-7399-4fcf-8ab1-dfe79e82e9bc ("storage-provisioner_kube-system(e7e03576-7399-4fcf-8ab1-dfe79e82e9bc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.223249 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:10 old-k8s-version-011723 kubelet[664]: E0912 23:22:10.472436     664 pod_workers.go:191] Error syncing pod 5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc ("kindnet-rdqkd_kube-system(5cabfb63-8e1c-4bbb-9965-e00fc5de8bbc)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.224648 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.366884     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.226290 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.463324     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.228181 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.478847     664 pod_workers.go:191] Error syncing pod 3b3346e8-fe2e-46fc-8cd9-21264698e11a ("coredns-74ff55c5b-lzb66_kube-system(3b3346e8-fe2e-46fc-8cd9-21264698e11a)"), skipping: failed to "StartContainer" for "coredns" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.229717 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.482852     664 pod_workers.go:191] Error syncing pod c2d2a606-20fe-4e2b-a1c0-ba5741b38145 ("kube-proxy-cd4m4_kube-system(c2d2a606-20fe-4e2b-a1c0-ba5741b38145)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.230538 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.566317     664 pod_workers.go:191] Error syncing pod 7a260c8b-3e99-476d-bb2a-f42a54017c50 ("busybox_default(7a260c8b-3e99-476d-bb2a-f42a54017c50)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0912 23:27:44.233034 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:11 old-k8s-version-011723 kubelet[664]: E0912 23:22:11.591024     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.233356 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:12 old-k8s-version-011723 kubelet[664]: E0912 23:22:12.488949     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.238182 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:34 old-k8s-version-011723 kubelet[664]: E0912 23:22:34.530645     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.239114 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:36 old-k8s-version-011723 kubelet[664]: E0912 23:22:36.632305     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.239454 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:37 old-k8s-version-011723 kubelet[664]: E0912 23:22:37.632313     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.239792 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:44 old-k8s-version-011723 kubelet[664]: E0912 23:22:44.683335     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.240321 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:47 old-k8s-version-011723 kubelet[664]: E0912 23:22:47.274232     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.240914 1805825 logs.go:138] Found kubelet problem: Sep 12 23:22:58 old-k8s-version-011723 kubelet[664]: E0912 23:22:58.684667     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.243409 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:00 old-k8s-version-011723 kubelet[664]: E0912 23:23:00.296836     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.243745 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:04 old-k8s-version-011723 kubelet[664]: E0912 23:23:04.683877     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.243932 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:12 old-k8s-version-011723 kubelet[664]: E0912 23:23:12.274275     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.244259 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:16 old-k8s-version-011723 kubelet[664]: E0912 23:23:16.273507     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.244448 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:23 old-k8s-version-011723 kubelet[664]: E0912 23:23:23.274327     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.245036 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:29 old-k8s-version-011723 kubelet[664]: E0912 23:23:29.784506     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.245365 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:34 old-k8s-version-011723 kubelet[664]: E0912 23:23:34.683510     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.245553 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:38 old-k8s-version-011723 kubelet[664]: E0912 23:23:38.273957     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.245882 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:50 old-k8s-version-011723 kubelet[664]: E0912 23:23:50.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.248381 1805825 logs.go:138] Found kubelet problem: Sep 12 23:23:52 old-k8s-version-011723 kubelet[664]: E0912 23:23:52.283800     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.248855 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:01 old-k8s-version-011723 kubelet[664]: E0912 23:24:01.274256     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.250523 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:04 old-k8s-version-011723 kubelet[664]: E0912 23:24:04.274131     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.251141 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:14 old-k8s-version-011723 kubelet[664]: E0912 23:24:14.908668     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.251337 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:15 old-k8s-version-011723 kubelet[664]: E0912 23:24:15.282101     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.251674 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:24 old-k8s-version-011723 kubelet[664]: E0912 23:24:24.683372     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.251898 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:27 old-k8s-version-011723 kubelet[664]: E0912 23:24:27.273932     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252232 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:40 old-k8s-version-011723 kubelet[664]: E0912 23:24:40.274451     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.252419 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:42 old-k8s-version-011723 kubelet[664]: E0912 23:24:42.274046     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252606 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:53 old-k8s-version-011723 kubelet[664]: E0912 23:24:53.273989     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.252934 1805825 logs.go:138] Found kubelet problem: Sep 12 23:24:54 old-k8s-version-011723 kubelet[664]: E0912 23:24:54.273546     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.253120 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:04 old-k8s-version-011723 kubelet[664]: E0912 23:25:04.273901     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.253473 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:05 old-k8s-version-011723 kubelet[664]: E0912 23:25:05.274360     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.255953 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:15 old-k8s-version-011723 kubelet[664]: E0912 23:25:15.305120     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0912 23:27:44.256289 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:19 old-k8s-version-011723 kubelet[664]: E0912 23:25:19.274155     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.256481 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:28 old-k8s-version-011723 kubelet[664]: E0912 23:25:28.273983     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.256812 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:30 old-k8s-version-011723 kubelet[664]: E0912 23:25:30.273912     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.256999 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:39 old-k8s-version-011723 kubelet[664]: E0912 23:25:39.277598     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.257586 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:43 old-k8s-version-011723 kubelet[664]: E0912 23:25:43.182848     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.257913 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:44 old-k8s-version-011723 kubelet[664]: E0912 23:25:44.683303     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.258101 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:51 old-k8s-version-011723 kubelet[664]: E0912 23:25:51.278776     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.258443 1805825 logs.go:138] Found kubelet problem: Sep 12 23:25:58 old-k8s-version-011723 kubelet[664]: E0912 23:25:58.273502     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.258628 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:03 old-k8s-version-011723 kubelet[664]: E0912 23:26:03.278105     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.258985 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:10 old-k8s-version-011723 kubelet[664]: E0912 23:26:10.273581     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.259180 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:14 old-k8s-version-011723 kubelet[664]: E0912 23:26:14.273888     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.259511 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: E0912 23:26:22.273505     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.259708 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:27 old-k8s-version-011723 kubelet[664]: E0912 23:26:27.278136     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260056 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: E0912 23:26:36.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.260247 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:38 old-k8s-version-011723 kubelet[664]: E0912 23:26:38.273832     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260434 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:49 old-k8s-version-011723 kubelet[664]: E0912 23:26:49.273962     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.260762 1805825 logs.go:138] Found kubelet problem: Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: E0912 23:26:50.273517     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.260949 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.261277 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.261466 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.261795 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.261982 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.262312 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.262501 1805825 logs.go:138] Found kubelet problem: Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:44.262513 1805825 logs.go:123] Gathering logs for describe nodes ...
	I0912 23:27:44.262527 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 23:27:44.406836 1805825 logs.go:123] Gathering logs for kube-apiserver [5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a] ...
	I0912 23:27:44.406930 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a"
	I0912 23:27:44.479657 1805825 logs.go:123] Gathering logs for kube-scheduler [e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9] ...
	I0912 23:27:44.479689 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9"
	I0912 23:27:44.528355 1805825 logs.go:123] Gathering logs for storage-provisioner [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9] ...
	I0912 23:27:44.528385 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9"
	I0912 23:27:44.590340 1805825 logs.go:123] Gathering logs for dmesg ...
	I0912 23:27:44.590373 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 23:27:44.607129 1805825 logs.go:123] Gathering logs for kube-apiserver [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b] ...
	I0912 23:27:44.607185 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b"
	I0912 23:27:44.669500 1805825 logs.go:123] Gathering logs for kube-scheduler [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb] ...
	I0912 23:27:44.669536 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb"
	I0912 23:27:44.708883 1805825 logs.go:123] Gathering logs for container status ...
	I0912 23:27:44.708912 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 23:27:44.757271 1805825 logs.go:123] Gathering logs for etcd [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa] ...
	I0912 23:27:44.757307 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa"
	I0912 23:27:44.800471 1805825 logs.go:123] Gathering logs for coredns [c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28] ...
	I0912 23:27:44.800505 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28"
	I0912 23:27:44.845088 1805825 logs.go:123] Gathering logs for kube-proxy [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168] ...
	I0912 23:27:44.845117 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168"
	I0912 23:27:44.883749 1805825 logs.go:123] Gathering logs for kube-controller-manager [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340] ...
	I0912 23:27:44.883783 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340"
	I0912 23:27:44.946173 1805825 logs.go:123] Gathering logs for kindnet [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64] ...
	I0912 23:27:44.946213 1805825 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64"
	I0912 23:27:44.987811 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:44.987836 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0912 23:27:44.987883 1805825 out.go:270] X Problems detected in kubelet:
	W0912 23:27:44.987898 1805825 out.go:270]   Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.987905 1805825 out.go:270]   Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.987918 1805825 out.go:270]   Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0912 23:27:44.987925 1805825 out.go:270]   Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	W0912 23:27:44.987931 1805825 out.go:270]   Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0912 23:27:44.987940 1805825 out.go:358] Setting ErrFile to fd 2...
	I0912 23:27:44.987946 1805825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:27:54.989447 1805825 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 23:27:55.001137 1805825 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0912 23:27:55.007655 1805825 out.go:201] 
	W0912 23:27:55.012275 1805825 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0912 23:27:55.012327 1805825 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0912 23:27:55.012348 1805825 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0912 23:27:55.012355 1805825 out.go:270] * 
	W0912 23:27:55.013626 1805825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 23:27:55.016564 1805825 out.go:201] 
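	A concrete sketch of the suggestion minikube prints above, assuming the profile name, Kubernetes version, and container runtime shown elsewhere in this log (the driver flag matches this job's Docker environment; the exact flags the test harness passes may differ):
	
	    minikube delete --all --purge
	    minikube start -p old-k8s-version-011723 --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker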
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	23ddd9c911f9a       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8536b8402b347       dashboard-metrics-scraper-8d5bb5db8-gxw94
	3bea0dd674f7b       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   e6fd6729758f7       kubernetes-dashboard-cd95d586-nbkcq
	f6ce1f61d9cdc       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   e2e9a7894e9a9       kindnet-rdqkd
	1b307b6cba68d       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   3b4f3f0b9a7ea       storage-provisioner
	94347fb65e2d7       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   a89a35f584206       kube-proxy-cd4m4
	761c0b37e3bd3       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   d4036057106b1       coredns-74ff55c5b-lzb66
	60a332fd5f5ef       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   f289cf8c42a9c       busybox
	e81f940befd9c       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   e26b57d35b17e       kube-controller-manager-old-k8s-version-011723
	ea0849f05ee3e       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   68eb3f2051cfa       kube-scheduler-old-k8s-version-011723
	e0f86d0bfe33b       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   7b6b5beb05e89       kube-apiserver-old-k8s-version-011723
	acc9f1ddc829d       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   862d77e5c28c7       etcd-old-k8s-version-011723
	a1e8f9e5e6f5a       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   57e50fc774581       busybox
	91179dc2e027f       ba04bb24b9575       7 minutes ago       Exited              storage-provisioner         1                   e4a3396b752cf       storage-provisioner
	c93b4416e8ffc       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   5169b70f43f3b       coredns-74ff55c5b-lzb66
	2c2b9c95f2047       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   f9a9023248a6d       kindnet-rdqkd
	f6290a74934bc       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   7885efd8707da       kube-proxy-cd4m4
	5a9ffb0bdef03       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   310f2ed043cf4       kube-apiserver-old-k8s-version-011723
	ecb0941161c01       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   f34ebf9f08469       kube-controller-manager-old-k8s-version-011723
	e76fc4569f919       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   bf5bd507cacea       kube-scheduler-old-k8s-version-011723
	f32f98ccfd632       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   9830e9e33988c       etcd-old-k8s-version-011723
	
	
	==> containerd <==
	Sep 12 23:23:52 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:23:52.282388954Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 12 23:23:52 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:23:52.282509790Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.276467592Z" level=info msg="CreateContainer within sandbox \"8536b8402b347a760910916dc213a9eff37a646ad2d6185562f562ae2d62ab9a\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.309104420Z" level=info msg="CreateContainer within sandbox \"8536b8402b347a760910916dc213a9eff37a646ad2d6185562f562ae2d62ab9a\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea\""
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.311188483Z" level=info msg="StartContainer for \"e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea\""
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.380224942Z" level=info msg="StartContainer for \"e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea\" returns successfully"
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.409213117Z" level=info msg="shim disconnected" id=e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea namespace=k8s.io
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.409276091Z" level=warning msg="cleaning up after shim disconnected" id=e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea namespace=k8s.io
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.409287061Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.437051855Z" level=warning msg="cleanup warnings time=\"2024-09-12T23:24:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.910257069Z" level=info msg="RemoveContainer for \"9e422168b4f2d9c594c354f070cdcf0beef1fca4a58ac299287c2ec6e8bd52b2\""
	Sep 12 23:24:14 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:24:14.916063156Z" level=info msg="RemoveContainer for \"9e422168b4f2d9c594c354f070cdcf0beef1fca4a58ac299287c2ec6e8bd52b2\" returns successfully"
	Sep 12 23:25:15 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:15.274623782Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:25:15 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:15.299003060Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 12 23:25:15 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:15.304398128Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 12 23:25:15 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:15.304419560Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.275855691Z" level=info msg="CreateContainer within sandbox \"8536b8402b347a760910916dc213a9eff37a646ad2d6185562f562ae2d62ab9a\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.291787307Z" level=info msg="CreateContainer within sandbox \"8536b8402b347a760910916dc213a9eff37a646ad2d6185562f562ae2d62ab9a\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a\""
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.292591891Z" level=info msg="StartContainer for \"23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a\""
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.364076987Z" level=info msg="StartContainer for \"23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a\" returns successfully"
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.388443055Z" level=info msg="shim disconnected" id=23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a namespace=k8s.io
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.388755562Z" level=warning msg="cleaning up after shim disconnected" id=23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a namespace=k8s.io
	Sep 12 23:25:42 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:42.388866036Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 12 23:25:43 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:43.182164610Z" level=info msg="RemoveContainer for \"e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea\""
	Sep 12 23:25:43 old-k8s-version-011723 containerd[570]: time="2024-09-12T23:25:43.188134823Z" level=info msg="RemoveContainer for \"e711c2685fc81641a06aa2ffe379c139a40b34354f345ca3f90af3d05f2c21ea\" returns successfully"
	
	
	==> coredns [761c0b37e3bd3fd65bc1c99066ad373975baea39f4a526a13061e89e5f038623] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:59128 - 50869 "HINFO IN 3671984939070762930.3880467561831516647. udp 57 false 512" NXDOMAIN qr,rd,ra 57 2.034883183s
	[INFO] 127.0.0.1:51662 - 32959 "HINFO IN 3671984939070762930.3880467561831516647. udp 57 false 512" NOERROR - 0 6.001300457s
	[ERROR] plugin/errors: 2 3671984939070762930.3880467561831516647. HINFO: read udp 10.244.0.2:54622->192.168.76.1:53: i/o timeout
	[INFO] 127.0.0.1:59587 - 480 "HINFO IN 3671984939070762930.3880467561831516647. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003677633s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0912 23:22:43.448178       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-12 23:22:13.44755985 +0000 UTC m=+0.021195275) (total time: 30.000520928s):
	Trace[939984059]: [30.000520928s] [30.000520928s] END
	E0912 23:22:43.448213       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0912 23:22:43.448453       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-12 23:22:13.447561959 +0000 UTC m=+0.021197375) (total time: 30.000876103s):
	Trace[1474941318]: [30.000876103s] [30.000876103s] END
	E0912 23:22:43.448465       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0912 23:22:43.448864       1 trace.go:116] Trace[140954425]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-12 23:22:13.447643509 +0000 UTC m=+0.021278926) (total time: 30.001207121s):
	Trace[140954425]: [30.001207121s] [30.001207121s] END
	E0912 23:22:43.448876       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [c93b4416e8ffc76bcfb3de94b8fa5802dd63dc41e4acc4fc544c34aa1feb1b28] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58995 - 8857 "HINFO IN 8170924649044574805.1020949474233048069. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025290821s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-011723
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-011723
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=old-k8s-version-011723
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T23_19_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 23:19:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-011723
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 23:27:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 23:23:00 +0000   Thu, 12 Sep 2024 23:19:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 23:23:00 +0000   Thu, 12 Sep 2024 23:19:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 23:23:00 +0000   Thu, 12 Sep 2024 23:19:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 23:23:00 +0000   Thu, 12 Sep 2024 23:19:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-011723
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2fe619b0021444289fc6c0a284ab2ea
	  System UUID:                86eab556-700e-47b9-a657-e438f91a4689
	  Boot ID:                    df7282e8-9021-4c1b-a6eb-f0483f23e85d
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-74ff55c5b-lzb66                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m28s
	  kube-system                 etcd-old-k8s-version-011723                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m35s
	  kube-system                 kindnet-rdqkd                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m28s
	  kube-system                 kube-apiserver-old-k8s-version-011723             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-old-k8s-version-011723    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-cd4m4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-old-k8s-version-011723             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 metrics-server-9975d5f86-gklxg                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-gxw94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-nbkcq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m55s (x5 over 8m55s)  kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s (x4 over 8m55s)  kubelet     Node old-k8s-version-011723 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s (x4 over 8m55s)  kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet     Node old-k8s-version-011723 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m28s                  kubelet     Node old-k8s-version-011723 status is now: NodeReady
	  Normal  Starting                 8m25s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m3s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-011723 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-011723 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m35s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [acc9f1ddc829dfa9f554d203a0dc76b0defba802e8450cc94a12fb75c99126aa] <==
	2024-09-12 23:23:58.021591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:08.020606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:18.020622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:28.020682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:38.020609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:48.020623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:24:58.020766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:08.020706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:18.020709 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:28.020754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:38.020582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:48.020629 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:25:58.020593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:08.020574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:18.020762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:28.020474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:38.020690 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:48.020664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:26:58.020833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:08.021113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:18.020721 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:28.020668 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:38.020699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:48.021040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:27:58.021395 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f32f98ccfd63279d6f3bc06a836ad9216250db5bb7e829a5947b9f9335ab97ab] <==
	raft2024/09/12 23:19:05 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/09/12 23:19:05 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/12 23:19:05 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/12 23:19:05 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-12 23:19:05.371338 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-12 23:19:05.372607 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-12 23:19:05.372703 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-12 23:19:05.372755 I | etcdserver: published {Name:old-k8s-version-011723 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-12 23:19:05.372895 I | embed: ready to serve client requests
	2024-09-12 23:19:05.374305 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-12 23:19:05.378230 I | embed: ready to serve client requests
	2024-09-12 23:19:05.384164 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-12 23:19:27.085121 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:19:29.757767 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:19:39.757978 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:19:49.757859 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:19:59.757852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:09.757924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:19.757937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:29.757926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:39.757921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:49.757907 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:20:59.757973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:21:09.758089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-12 23:21:19.758020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:27:58 up  8:10,  0 users,  load average: 0.74, 1.74, 2.48
	Linux old-k8s-version-011723 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2c2b9c95f20475fd1ded65056f329c800c5b72090b93755ea84cdae9c1c49ade] <==
	I0912 23:19:34.719321       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0912 23:19:34.719345       1 metrics.go:61] Registering metrics
	I0912 23:19:34.719410       1 controller.go:374] Syncing nftables rules
	I0912 23:19:44.525245       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:19:44.525303       1 main.go:299] handling current node
	I0912 23:19:54.516312       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:19:54.516351       1 main.go:299] handling current node
	I0912 23:20:04.516217       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:04.516323       1 main.go:299] handling current node
	I0912 23:20:14.524940       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:14.524978       1 main.go:299] handling current node
	I0912 23:20:24.524947       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:24.525000       1 main.go:299] handling current node
	I0912 23:20:34.516856       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:34.516922       1 main.go:299] handling current node
	I0912 23:20:44.519579       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:44.519829       1 main.go:299] handling current node
	I0912 23:20:54.524918       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:20:54.524954       1 main.go:299] handling current node
	I0912 23:21:04.516086       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:21:04.516122       1 main.go:299] handling current node
	I0912 23:21:14.519776       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:21:14.519864       1 main.go:299] handling current node
	I0912 23:21:24.516069       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:21:24.516103       1 main.go:299] handling current node
	
	
	==> kindnet [f6ce1f61d9cdc3364f68e851d840e80c8ca80cb36c73fe5e3bebf58fffd67d64] <==
	I0912 23:25:56.071852       1 main.go:299] handling current node
	I0912 23:26:06.068157       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:06.068192       1 main.go:299] handling current node
	I0912 23:26:16.060557       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:16.060592       1 main.go:299] handling current node
	I0912 23:26:26.061179       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:26.061214       1 main.go:299] handling current node
	I0912 23:26:36.067823       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:36.068173       1 main.go:299] handling current node
	I0912 23:26:46.069876       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:46.069915       1 main.go:299] handling current node
	I0912 23:26:56.067791       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:26:56.067825       1 main.go:299] handling current node
	I0912 23:27:06.067837       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:06.067871       1 main.go:299] handling current node
	I0912 23:27:16.063465       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:16.063532       1 main.go:299] handling current node
	I0912 23:27:26.061334       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:26.061387       1 main.go:299] handling current node
	I0912 23:27:36.067795       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:36.067929       1 main.go:299] handling current node
	I0912 23:27:46.072372       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:46.072476       1 main.go:299] handling current node
	I0912 23:27:56.070190       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0912 23:27:56.070228       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5a9ffb0bdef03445db5a2e0deb47eae9b3b5c491cc48e85ec57ddea267eedb4a] <==
	I0912 23:19:12.638890       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0912 23:19:12.648376       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0912 23:19:12.652298       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0912 23:19:12.652511       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0912 23:19:13.175258       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 23:19:13.215262       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0912 23:19:13.306051       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0912 23:19:13.307289       1 controller.go:606] quota admission added evaluator for: endpoints
	I0912 23:19:13.311439       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 23:19:13.683503       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 23:19:14.286918       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0912 23:19:14.857071       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0912 23:19:14.985537       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0912 23:19:30.712590       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0912 23:19:30.758930       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0912 23:19:51.196903       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:19:51.196948       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:19:51.196958       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0912 23:20:32.410331       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:20:32.410375       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:20:32.410384       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0912 23:21:04.987924       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:21:04.988112       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:21:04.988130       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0912 23:21:24.792645       1 upgradeaware.go:373] Error proxying data from client to backend: write tcp 192.168.76.2:59782->192.168.76.2:10250: write: broken pipe
	
	
	==> kube-apiserver [e0f86d0bfe33b8095d25853a16abf60a6eb01e25bce62a9626617a714402ae4b] <==
	I0912 23:24:07.280314       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:24:07.280324       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0912 23:24:42.925731       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:24:42.925774       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:24:42.925783       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0912 23:25:09.871130       1 handler_proxy.go:102] no RequestInfo found in the context
	E0912 23:25:09.871389       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0912 23:25:09.871498       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:25:27.603537       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:25:27.603584       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:25:27.603593       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0912 23:26:10.632999       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:26:10.633060       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:26:10.633083       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0912 23:26:54.532562       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:26:54.532609       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:26:54.532618       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0912 23:27:08.504413       1 handler_proxy.go:102] no RequestInfo found in the context
	E0912 23:27:08.504492       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0912 23:27:08.504500       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 23:27:25.082001       1 client.go:360] parsed scheme: "passthrough"
	I0912 23:27:25.082050       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0912 23:27:25.082070       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
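
	Note: the repeated 'loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed ... 503 service unavailable' entries in both kube-apiserver logs indicate the metrics.k8s.io APIService has no healthy backing endpoints, which is consistent with the metrics-server pod never starting (see the kubelet log further down). As an illustrative check only (not part of this run, and assuming the minikube profile name old-k8s-version-011723 is also the kubectl context, as minikube normally configures), the APIService status could be inspected with:

	  kubectl --context old-k8s-version-011723 get apiservice v1beta1.metrics.k8s.io -o wide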
	
	
	==> kube-controller-manager [e81f940befd9c32db69b7ccd086e72889df854438d8dbfbae111448f01452340] <==
	W0912 23:23:31.018045       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:23:57.059274       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:24:02.668636       1 request.go:655] Throttling request took 1.048341157s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0912 23:24:03.520004       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:24:27.561287       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:24:35.170411       1 request.go:655] Throttling request took 1.04861441s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0912 23:24:36.022086       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:24:58.063200       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:25:07.672635       1 request.go:655] Throttling request took 1.048380054s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0912 23:25:08.524435       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:25:28.565092       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:25:40.174865       1 request.go:655] Throttling request took 1.048236597s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0912 23:25:41.026457       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:25:59.107870       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:26:12.676846       1 request.go:655] Throttling request took 1.048259507s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0912 23:26:13.528392       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:26:29.610319       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:26:45.130267       1 request.go:655] Throttling request took 1.0000563s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0912 23:26:46.030302       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:27:00.188200       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:27:17.680904       1 request.go:655] Throttling request took 1.047839369s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0912 23:27:18.532652       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0912 23:27:30.690245       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0912 23:27:50.183348       1 request.go:655] Throttling request took 1.048584954s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0912 23:27:51.034775       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [ecb0941161c01f2903ae5ed80b627c909a9be9cdbe516b50d630cc3ee9623e99] <==
	I0912 23:19:30.717542       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0912 23:19:30.725535       1 shared_informer.go:247] Caches are synced for stateful set 
	I0912 23:19:30.736782       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0912 23:19:30.741518       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0912 23:19:30.760797       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0912 23:19:30.786973       1 range_allocator.go:373] Set node old-k8s-version-011723 PodCIDR to [10.244.0.0/24]
	I0912 23:19:30.794869       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0912 23:19:30.847650       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b6hmj"
	I0912 23:19:30.864927       1 shared_informer.go:247] Caches are synced for endpoint 
	I0912 23:19:30.865400       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0912 23:19:30.993538       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lzb66"
	I0912 23:19:30.993578       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cd4m4"
	I0912 23:19:31.019516       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rdqkd"
	I0912 23:19:31.048904       1 shared_informer.go:247] Caches are synced for resource quota 
	I0912 23:19:31.068646       1 shared_informer.go:247] Caches are synced for resource quota 
	I0912 23:19:31.165791       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0912 23:19:31.204157       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3a79f40c-c3f2-4416-b5d5-ec941e4e0c3a", ResourceVersion:"266", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63861779955, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001db74a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001db74c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001db74e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001db7500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001db7520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001db7540), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001db7560)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001db75a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001da5800), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000e8b9c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000ae87e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbe80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000e8ba10)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0912 23:19:31.363206       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0912 23:19:31.363230       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0912 23:19:31.366078       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0912 23:19:32.247887       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0912 23:19:32.309610       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-b6hmj"
	I0912 23:19:35.666875       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0912 23:21:25.724288       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0912 23:21:25.835050       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [94347fb65e2d7117514ca1de8cb6276745b15849b93c7affe7df10dbebbf9168] <==
	I0912 23:22:23.473234       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0912 23:22:23.473565       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0912 23:22:23.493076       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0912 23:22:23.493342       1 server_others.go:185] Using iptables Proxier.
	I0912 23:22:23.493935       1 server.go:650] Version: v1.20.0
	I0912 23:22:23.495074       1 config.go:315] Starting service config controller
	I0912 23:22:23.497345       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0912 23:22:23.496347       1 config.go:224] Starting endpoint slice config controller
	I0912 23:22:23.500851       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0912 23:22:23.597704       1 shared_informer.go:247] Caches are synced for service config 
	I0912 23:22:23.601051       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [f6290a74934bc9024e7157150e63817e6e4d073704f09667759df0a0cced6b71] <==
	I0912 23:19:33.573271       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0912 23:19:33.573357       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0912 23:19:33.619819       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0912 23:19:33.619946       1 server_others.go:185] Using iptables Proxier.
	I0912 23:19:33.620187       1 server.go:650] Version: v1.20.0
	I0912 23:19:33.620661       1 config.go:315] Starting service config controller
	I0912 23:19:33.620669       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0912 23:19:33.621266       1 config.go:224] Starting endpoint slice config controller
	I0912 23:19:33.621272       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0912 23:19:33.720779       1 shared_informer.go:247] Caches are synced for service config 
	I0912 23:19:33.721404       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [e76fc4569f9197f1a6d467c766d1036a47241f029e1d985fb693b6a76a3fa1c9] <==
	W0912 23:19:11.780104       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 23:19:11.780165       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 23:19:11.780177       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 23:19:11.780183       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 23:19:11.893695       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0912 23:19:11.895988       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:19:11.896206       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:19:11.896361       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0912 23:19:11.926354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 23:19:11.927454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 23:19:11.927542       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 23:19:11.928441       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 23:19:11.929052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 23:19:11.929802       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 23:19:11.929865       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 23:19:11.929629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 23:19:11.929683       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 23:19:11.929743       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:19:11.934886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 23:19:11.958489       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 23:19:12.824554       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 23:19:12.952242       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 23:19:12.971555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 23:19:13.011987       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0912 23:19:13.296728       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [ea0849f05ee3ec463fbcc693795336241b92e45201b97c6b584ebd5baa093cbb] <==
	I0912 23:22:00.794537       1 serving.go:331] Generated self-signed cert in-memory
	W0912 23:22:07.223860       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0912 23:22:07.223945       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 23:22:07.223979       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 23:22:07.224004       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 23:22:07.522791       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0912 23:22:07.524046       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:22:07.524069       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 23:22:07.524128       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0912 23:22:07.725719       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 12 23:26:14 old-k8s-version-011723 kubelet[664]: E0912 23:26:14.273888     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: I0912 23:26:22.273175     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:26:22 old-k8s-version-011723 kubelet[664]: E0912 23:26:22.273505     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:26:27 old-k8s-version-011723 kubelet[664]: E0912 23:26:27.278136     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: I0912 23:26:36.273368     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:26:36 old-k8s-version-011723 kubelet[664]: E0912 23:26:36.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:26:38 old-k8s-version-011723 kubelet[664]: E0912 23:26:38.273832     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:26:49 old-k8s-version-011723 kubelet[664]: E0912 23:26:49.273962     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: I0912 23:26:50.273189     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:26:50 old-k8s-version-011723 kubelet[664]: E0912 23:26:50.273517     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:27:01 old-k8s-version-011723 kubelet[664]: E0912 23:27:01.273890     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: I0912 23:27:04.273178     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:27:04 old-k8s-version-011723 kubelet[664]: E0912 23:27:04.273536     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:27:15 old-k8s-version-011723 kubelet[664]: E0912 23:27:15.274398     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: I0912 23:27:18.273245     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:27:18 old-k8s-version-011723 kubelet[664]: E0912 23:27:18.273947     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:27:30 old-k8s-version-011723 kubelet[664]: E0912 23:27:30.273978     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: I0912 23:27:33.273393     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:27:33 old-k8s-version-011723 kubelet[664]: E0912 23:27:33.273795     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:27:42 old-k8s-version-011723 kubelet[664]: E0912 23:27:42.274058     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:27:47 old-k8s-version-011723 kubelet[664]: I0912 23:27:47.273179     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:27:47 old-k8s-version-011723 kubelet[664]: E0912 23:27:47.273941     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
	Sep 12 23:27:54 old-k8s-version-011723 kubelet[664]: E0912 23:27:54.273872     664 pod_workers.go:191] Error syncing pod 1150ff82-3fff-429c-aef1-349e4c755bda ("metrics-server-9975d5f86-gklxg_kube-system(1150ff82-3fff-429c-aef1-349e4c755bda)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 12 23:27:58 old-k8s-version-011723 kubelet[664]: I0912 23:27:58.273201     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: 23ddd9c911f9ae9e32ab0ddb7797d7d891c0ddb41354e75d9185df9ba842120a
	Sep 12 23:27:58 old-k8s-version-011723 kubelet[664]: E0912 23:27:58.273548     664 pod_workers.go:191] Error syncing pod 1e945ecb-a33d-4b6a-9bd7-caa823971a60 ("dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-gxw94_kubernetes-dashboard(1e945ecb-a33d-4b6a-9bd7-caa823971a60)"
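
	Note: the kubelet entries above show two separate failure loops: metrics-server never starts because its image registry "fake.domain" is unresolvable, so the image pull backs off indefinitely (ImagePullBackOff), while dashboard-metrics-scraper keeps exiting and is held in CrashLoopBackOff. As an illustrative check only (assuming the deployment is named metrics-server, matching the pod prefix in the log, and that the profile name is also the kubectl context), the configured image could be read with:

	  kubectl --context old-k8s-version-011723 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'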
	
	
	==> kubernetes-dashboard [3bea0dd674f7bd0c10cafef9ebed5cc94e155908f2a0a081b7ea53cc3d5699b8] <==
	2024/09/12 23:22:30 Using namespace: kubernetes-dashboard
	2024/09/12 23:22:30 Using in-cluster config to connect to apiserver
	2024/09/12 23:22:30 Using secret token for csrf signing
	2024/09/12 23:22:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/12 23:22:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/12 23:22:30 Successful initial request to the apiserver, version: v1.20.0
	2024/09/12 23:22:30 Generating JWE encryption key
	2024/09/12 23:22:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/12 23:22:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/12 23:22:31 Initializing JWE encryption key from synchronized object
	2024/09/12 23:22:31 Creating in-cluster Sidecar client
	2024/09/12 23:22:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:22:31 Serving insecurely on HTTP port: 9090
	2024/09/12 23:23:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:23:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:24:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:24:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:25:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:25:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:26:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:26:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:27:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:27:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/12 23:22:30 Starting overwatch
	
	
	==> storage-provisioner [1b307b6cba68d6c8149923a5069fe408b55a8df7afffe8f6b8482bde9ab7acb9] <==
	I0912 23:22:24.374525       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:22:24.392494       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:22:24.392828       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:22:41.869533       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:22:41.869926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-011723_68fc8e14-f8ca-404c-a536-d9f9fdbb5e8f!
	I0912 23:22:41.869732       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0500b77-7c87-46c0-88c0-7d5184862bf0", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-011723_68fc8e14-f8ca-404c-a536-d9f9fdbb5e8f became leader
	I0912 23:22:41.970322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-011723_68fc8e14-f8ca-404c-a536-d9f9fdbb5e8f!
	
	
	==> storage-provisioner [91179dc2e027fa651d685d7cdf05c0aafca67932f357c8b73667610dd490b299] <==
	I0912 23:20:03.600865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 23:20:03.619858       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 23:20:03.619903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 23:20:03.632545       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 23:20:03.632899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-011723_94c33610-e68f-48ef-9266-61e70d52b772!
	I0912 23:20:03.634203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0500b77-7c87-46c0-88c0-7d5184862bf0", APIVersion:"v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-011723_94c33610-e68f-48ef-9266-61e70d52b772 became leader
	I0912 23:20:03.733308       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-011723_94c33610-e68f-48ef-9266-61e70d52b772!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-011723 -n old-k8s-version-011723
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-011723 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-gklxg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-011723 describe pod metrics-server-9975d5f86-gklxg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-011723 describe pod metrics-server-9975d5f86-gklxg: exit status 1 (417.301719ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-gklxg" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-011723 describe pod metrics-server-9975d5f86-gklxg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.66s)


Test pass (298/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.75
4 TestDownloadOnly/v1.20.0/preload-exists 0.01
8 TestDownloadOnly/v1.20.0/LogsDuration 0.25
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 8.09
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 267.25
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.16
34 TestAddons/parallel/Ingress 19.97
35 TestAddons/parallel/InspektorGadget 10.87
36 TestAddons/parallel/MetricsServer 6.96
39 TestAddons/parallel/CSI 58.7
40 TestAddons/parallel/Headlamp 14.87
41 TestAddons/parallel/CloudSpanner 6.78
42 TestAddons/parallel/LocalPath 51.84
43 TestAddons/parallel/NvidiaDevicePlugin 6.91
44 TestAddons/parallel/Yakd 11.9
45 TestAddons/StoppedEnableDisable 12.23
46 TestCertOptions 33.78
47 TestCertExpiration 234.03
49 TestForceSystemdFlag 32.85
50 TestForceSystemdEnv 40.79
51 TestDockerEnvContainerd 47.4
56 TestErrorSpam/setup 28.19
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.17
59 TestErrorSpam/pause 1.77
60 TestErrorSpam/unpause 1.9
61 TestErrorSpam/stop 1.48
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 88.55
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.37
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
73 TestFunctional/serial/CacheCmd/cache/add_local 1.23
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 49.06
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.77
84 TestFunctional/serial/LogsFileCmd 1.69
85 TestFunctional/serial/InvalidService 4.86
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 7.58
89 TestFunctional/parallel/DryRun 0.42
90 TestFunctional/parallel/InternationalLanguage 0.23
91 TestFunctional/parallel/StatusCmd 1.21
95 TestFunctional/parallel/ServiceCmdConnect 9.69
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 26.61
99 TestFunctional/parallel/SSHCmd 0.8
100 TestFunctional/parallel/CpCmd 2.04
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.18
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
111 TestFunctional/parallel/License 0.57
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.33
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
125 TestFunctional/parallel/ServiceCmd/List 0.59
126 TestFunctional/parallel/ProfileCmd/profile_list 0.45
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
129 TestFunctional/parallel/MountCmd/any-port 8.42
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
131 TestFunctional/parallel/ServiceCmd/Format 0.45
132 TestFunctional/parallel/ServiceCmd/URL 0.49
133 TestFunctional/parallel/MountCmd/specific-port 2.42
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.48
135 TestFunctional/parallel/Version/short 0.08
136 TestFunctional/parallel/Version/components 1.32
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
142 TestFunctional/parallel/ImageCommands/Setup 0.91
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 133.48
160 TestMultiControlPlane/serial/DeployApp 33.15
161 TestMultiControlPlane/serial/PingHostFromPods 1.56
162 TestMultiControlPlane/serial/AddWorkerNode 21.36
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.18
165 TestMultiControlPlane/serial/CopyFile 19.34
166 TestMultiControlPlane/serial/StopSecondaryNode 12.82
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 30.11
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 137.38
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.46
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
173 TestMultiControlPlane/serial/StopCluster 36
174 TestMultiControlPlane/serial/RestartCluster 79.31
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
176 TestMultiControlPlane/serial/AddSecondaryNode 44.44
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
181 TestJSONOutput/start/Command 50.3
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.72
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.81
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 39.91
207 TestKicCustomNetwork/use_default_bridge_network 34.2
208 TestKicExistingNetwork 32.58
209 TestKicCustomSubnet 35.23
210 TestKicStaticIP 31.51
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 67.09
215 TestMountStart/serial/StartWithMountFirst 5.91
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 6.92
218 TestMountStart/serial/VerifyMountSecond 0.28
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.59
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 68.9
227 TestMultiNode/serial/DeployApp2Nodes 17.2
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 18.75
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.35
232 TestMultiNode/serial/CopyFile 10.21
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.52
235 TestMultiNode/serial/RestartKeepsNodes 99.68
236 TestMultiNode/serial/DeleteNode 5.58
237 TestMultiNode/serial/StopMultiNode 24.11
238 TestMultiNode/serial/RestartMultiNode 52.47
239 TestMultiNode/serial/ValidateNameConflict 34.27
244 TestPreload 112.53
246 TestScheduledStopUnix 108.56
249 TestInsufficientStorage 10.08
250 TestRunningBinaryUpgrade 88.2
252 TestKubernetesUpgrade 349.32
253 TestMissingContainerUpgrade 170.63
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 40.78
257 TestNoKubernetes/serial/StartWithStopK8s 10.38
258 TestNoKubernetes/serial/Start 6.59
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
260 TestNoKubernetes/serial/ProfileList 0.71
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.61
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
264 TestStoppedBinaryUpgrade/Setup 0.62
265 TestStoppedBinaryUpgrade/Upgrade 97.65
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
275 TestPause/serial/Start 96.09
283 TestNetworkPlugins/group/false 4.9
284 TestPause/serial/SecondStartNoReconfiguration 7.28
288 TestPause/serial/Pause 0.88
289 TestPause/serial/VerifyStatus 0.37
290 TestPause/serial/Unpause 0.89
291 TestPause/serial/PauseAgain 1.11
292 TestPause/serial/DeletePaused 3.55
293 TestPause/serial/VerifyDeletedResources 0.21
295 TestStartStop/group/old-k8s-version/serial/FirstStart 164.05
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.62
298 TestStartStop/group/no-preload/serial/FirstStart 73.21
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.46
300 TestStartStop/group/old-k8s-version/serial/Stop 12.32
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
303 TestStartStop/group/no-preload/serial/DeployApp 10.38
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
305 TestStartStop/group/no-preload/serial/Stop 12.08
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 290.15
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
311 TestStartStop/group/no-preload/serial/Pause 3.81
313 TestStartStop/group/embed-certs/serial/FirstStart 93.09
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.32
317 TestStartStop/group/old-k8s-version/serial/Pause 3.91
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.6
320 TestStartStop/group/embed-certs/serial/DeployApp 9.36
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
322 TestStartStop/group/embed-certs/serial/Stop 12.45
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.29
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/embed-certs/serial/SecondStart 268.33
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272.77
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.15
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
333 TestStartStop/group/embed-certs/serial/Pause 3.43
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/FirstStart 48.77
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.8
340 TestNetworkPlugins/group/auto/Start 97.87
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.41
343 TestStartStop/group/newest-cni/serial/Stop 1.32
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
345 TestStartStop/group/newest-cni/serial/SecondStart 16.24
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
349 TestStartStop/group/newest-cni/serial/Pause 3.19
350 TestNetworkPlugins/group/kindnet/Start 82.17
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 10.28
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestNetworkPlugins/group/calico/Start 70.23
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
359 TestNetworkPlugins/group/kindnet/NetCatPod 9.35
360 TestNetworkPlugins/group/kindnet/DNS 0.36
361 TestNetworkPlugins/group/kindnet/Localhost 0.31
362 TestNetworkPlugins/group/kindnet/HairPin 0.25
363 TestNetworkPlugins/group/custom-flannel/Start 52.19
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.44
366 TestNetworkPlugins/group/calico/NetCatPod 11.45
367 TestNetworkPlugins/group/calico/DNS 0.26
368 TestNetworkPlugins/group/calico/Localhost 0.19
369 TestNetworkPlugins/group/calico/HairPin 0.19
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.37
372 TestNetworkPlugins/group/custom-flannel/DNS 0.23
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
375 TestNetworkPlugins/group/enable-default-cni/Start 50.8
376 TestNetworkPlugins/group/flannel/Start 54.51
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 75.26
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.48
385 TestNetworkPlugins/group/flannel/NetCatPod 10.34
386 TestNetworkPlugins/group/flannel/DNS 0.22
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.3
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (11.75s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-570075 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-570075 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.754460678s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.75s)

TestDownloadOnly/v1.20.0/preload-exists (0.01s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.01s)

TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-570075
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-570075: exit status 85 (246.546458ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-570075 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |          |
	|         | -p download-only-570075        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:29:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:29:12.637330 1597765 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:29:12.637535 1597765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:12.637565 1597765 out.go:358] Setting ErrFile to fd 2...
	I0912 22:29:12.637590 1597765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:12.637880 1597765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	W0912 22:29:12.638040 1597765 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-1592376/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-1592376/.minikube/config/config.json: no such file or directory
	I0912 22:29:12.638493 1597765 out.go:352] Setting JSON to true
	I0912 22:29:12.639389 1597765 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25880,"bootTime":1726154273,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 22:29:12.639491 1597765 start.go:139] virtualization:  
	I0912 22:29:12.642352 1597765 out.go:97] [download-only-570075] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0912 22:29:12.642563 1597765 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 22:29:12.642611 1597765 notify.go:220] Checking for updates...
	I0912 22:29:12.644670 1597765 out.go:169] MINIKUBE_LOCATION=19616
	I0912 22:29:12.647106 1597765 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:29:12.649756 1597765 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:29:12.651951 1597765 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 22:29:12.653864 1597765 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0912 22:29:12.657456 1597765 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 22:29:12.657745 1597765 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:29:12.689877 1597765 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:29:12.689983 1597765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:12.746808 1597765 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:29:12.737306275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:12.746917 1597765 docker.go:318] overlay module found
	I0912 22:29:12.748840 1597765 out.go:97] Using the docker driver based on user configuration
	I0912 22:29:12.748869 1597765 start.go:297] selected driver: docker
	I0912 22:29:12.748878 1597765 start.go:901] validating driver "docker" against <nil>
	I0912 22:29:12.749003 1597765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:12.805543 1597765 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:29:12.796172479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:12.805712 1597765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:29:12.806002 1597765 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0912 22:29:12.806166 1597765 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 22:29:12.808368 1597765 out.go:169] Using Docker driver with root privileges
	I0912 22:29:12.810390 1597765 cni.go:84] Creating CNI manager for ""
	I0912 22:29:12.810417 1597765 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 22:29:12.810429 1597765 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 22:29:12.810522 1597765 start.go:340] cluster config:
	{Name:download-only-570075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-570075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:12.812638 1597765 out.go:97] Starting "download-only-570075" primary control-plane node in "download-only-570075" cluster
	I0912 22:29:12.812670 1597765 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0912 22:29:12.814566 1597765 out.go:97] Pulling base image v0.0.45-1726156396-19616 ...
	I0912 22:29:12.814598 1597765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 22:29:12.814777 1597765 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 22:29:12.830484 1597765 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 22:29:12.830677 1597765 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 22:29:12.830788 1597765 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 22:29:12.876655 1597765 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0912 22:29:12.876688 1597765 cache.go:56] Caching tarball of preloaded images
	I0912 22:29:12.877360 1597765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 22:29:12.879458 1597765 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 22:29:12.879480 1597765 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0912 22:29:12.964539 1597765 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0912 22:29:18.036891 1597765 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0912 22:29:18.037072 1597765 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0912 22:29:19.156308 1597765 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0912 22:29:19.156664 1597765 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/download-only-570075/config.json ...
	I0912 22:29:19.156697 1597765 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/download-only-570075/config.json: {Name:mkace30ab822713a68486273ee43eb7c39d352ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:29:19.156885 1597765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0912 22:29:19.157094 1597765 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-570075 host does not exist
	  To start a cluster, run: "minikube start -p download-only-570075"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-570075
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (8.09s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-754658 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-754658 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.09016633s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.09s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-754658
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-754658: exit status 85 (75.662423ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-570075 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | -p download-only-570075        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| delete  | -p download-only-570075        | download-only-570075 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC | 12 Sep 24 22:29 UTC |
	| start   | -o=json --download-only        | download-only-754658 | jenkins | v1.34.0 | 12 Sep 24 22:29 UTC |                     |
	|         | -p download-only-754658        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 22:29:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:29:24.993303 1597964 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:29:24.993420 1597964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:24.993431 1597964 out.go:358] Setting ErrFile to fd 2...
	I0912 22:29:24.993436 1597964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:29:24.993717 1597964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:29:24.994139 1597964 out.go:352] Setting JSON to true
	I0912 22:29:24.995053 1597964 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25892,"bootTime":1726154273,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 22:29:24.995126 1597964 start.go:139] virtualization:  
	I0912 22:29:24.997521 1597964 out.go:97] [download-only-754658] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 22:29:24.997701 1597964 notify.go:220] Checking for updates...
	I0912 22:29:24.999319 1597964 out.go:169] MINIKUBE_LOCATION=19616
	I0912 22:29:25.012442 1597964 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:29:25.014874 1597964 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:29:25.016896 1597964 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 22:29:25.019059 1597964 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0912 22:29:25.023134 1597964 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 22:29:25.023444 1597964 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:29:25.052495 1597964 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:29:25.052579 1597964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:25.114928 1597964 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 22:29:25.104643455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:25.115050 1597964 docker.go:318] overlay module found
	I0912 22:29:25.117184 1597964 out.go:97] Using the docker driver based on user configuration
	I0912 22:29:25.117224 1597964 start.go:297] selected driver: docker
	I0912 22:29:25.117241 1597964 start.go:901] validating driver "docker" against <nil>
	I0912 22:29:25.117371 1597964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:29:25.169103 1597964 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 22:29:25.159245577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:29:25.169271 1597964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 22:29:25.169557 1597964 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0912 22:29:25.169718 1597964 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 22:29:25.172507 1597964 out.go:169] Using Docker driver with root privileges
	I0912 22:29:25.175136 1597964 cni.go:84] Creating CNI manager for ""
	I0912 22:29:25.175158 1597964 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0912 22:29:25.175172 1597964 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 22:29:25.175260 1597964 start.go:340] cluster config:
	{Name:download-only-754658 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-754658 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:29:25.177353 1597964 out.go:97] Starting "download-only-754658" primary control-plane node in "download-only-754658" cluster
	I0912 22:29:25.177380 1597964 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0912 22:29:25.179759 1597964 out.go:97] Pulling base image v0.0.45-1726156396-19616 ...
	I0912 22:29:25.179796 1597964 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 22:29:25.179850 1597964 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 22:29:25.195687 1597964 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 22:29:25.195852 1597964 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 22:29:25.195876 1597964 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 22:29:25.195884 1597964 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 22:29:25.195893 1597964 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 22:29:25.236344 1597964 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0912 22:29:25.236371 1597964 cache.go:56] Caching tarball of preloaded images
	I0912 22:29:25.237132 1597964 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0912 22:29:25.239129 1597964 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0912 22:29:25.239146 1597964 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0912 22:29:25.356968 1597964 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19616-1592376/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-754658 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754658"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-754658
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-332333 --alsologtostderr --binary-mirror http://127.0.0.1:43185 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-332333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-332333
--- PASS: TestBinaryMirror (0.56s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-509957
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-509957: exit status 85 (54.490706ms)

                                                
                                                
-- stdout --
	* Profile "addons-509957" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-509957"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-509957
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-509957: exit status 85 (76.685386ms)

                                                
                                                
-- stdout --
	* Profile "addons-509957" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-509957"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (267.25s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-509957 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-509957 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (4m27.247355814s)
--- PASS: TestAddons/Setup (267.25s)
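
For reference, the addon set exercised above can be reproduced by hand against a scratch profile. This is a sketch using the same flags recorded in the test invocation; the profile name is arbitrary, and the final listing step is an added check rather than part of the test.

# Start a profile with the same addons the Setup test enables (flags copied from the log above).
minikube start -p addons-demo --wait=true --memory=4000 \
  --driver=docker --container-runtime=containerd \
  --addons=registry --addons=metrics-server --addons=volumesnapshots \
  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
  --addons=inspektor-gadget --addons=storage-provisioner-rancher \
  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
  --addons=ingress --addons=ingress-dns

# Confirm which addons ended up enabled for the profile.
minikube addons list -p addons-demo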

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-509957 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-509957 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.140793ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-qlq9q" [31441b74-a88f-48b1-bd9f-37c0b02ea6a0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009416516s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q6ksf" [c1a39170-c345-46f4-845f-d000efef9490] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00404201s
addons_test.go:342: (dbg) Run:  kubectl --context addons-509957 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-509957 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-509957 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.152725708s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 ip
2024/09/12 22:37:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.16s)
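
The registry check above boils down to two probes: an in-cluster HTTP request against the registry Service, and a host-side request against the minikube node IP on port 5000 (the log's GET http://192.168.49.2:5000). A condensed sketch of the same probes, with the same image and service name:

# In-cluster probe: the registry addon exposes a Service in kube-system.
kubectl --context addons-509957 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# Host-side probe: hit the registry proxy on the node IP, port 5000.
curl -v "http://$(minikube -p addons-509957 ip):5000/"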

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-509957 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-509957 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-509957 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [97459155-3ad6-4119-af39-742f3719df3f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [97459155-3ad6-4119-af39-742f3719df3f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004186746s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-509957 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable ingress-dns --alsologtostderr -v=1: (1.149030017s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable ingress --alsologtostderr -v=1: (8.02617957s)
--- PASS: TestAddons/parallel/Ingress (19.97s)
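
The manifests this test applies (testdata/nginx-ingress-v1.yaml, testdata/nginx-pod-svc.yaml) are not reproduced in the log. The sketch below is an assumed minimal equivalent built with imperative kubectl commands, an nginx pod plus Service and an Ingress for nginx.example.com, verified the same way the test does, by curling from inside the node with a Host header override.

# Illustrative stand-ins for the test's nginx pod and service manifests.
kubectl --context addons-509957 run nginx --image=docker.io/nginx:alpine --port=80
kubectl --context addons-509957 expose pod nginx --port=80 --name=nginx

# Route nginx.example.com to that Service through the ingress-nginx controller.
kubectl --context addons-509957 create ingress nginx-example \
  --class=nginx --rule="nginx.example.com/*=nginx:80"

# Same check as the test: curl from inside the node with an overridden Host header.
minikube -p addons-509957 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"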

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-57k9z" [431a5873-4f16-4d23-a2a8-0c987ea08142] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005045125s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-509957
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-509957: (5.860664106s)
--- PASS: TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.823776ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-g4znh" [839c9ade-c469-4f4d-8fb0-9d230575561b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.009294072s
addons_test.go:417: (dbg) Run:  kubectl --context addons-509957 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.96s)
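
Once metrics-server reports healthy, resource metrics can be queried directly; the first command below is the one the test runs, the node-level variant is an added illustration.

# Pod-level metrics (what the test runs) and node-level metrics.
kubectl --context addons-509957 top pods -n kube-system
kubectl --context addons-509957 top nodes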

                                                
                                    
x
+
TestAddons/parallel/CSI (58.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.478395ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-509957 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-509957 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6cd5c7f9-84a4-4eb6-84a6-48759182fb21] Pending
helpers_test.go:344: "task-pv-pod" [6cd5c7f9-84a4-4eb6-84a6-48759182fb21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6cd5c7f9-84a4-4eb6-84a6-48759182fb21] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003292699s
addons_test.go:590: (dbg) Run:  kubectl --context addons-509957 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-509957 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-509957 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-509957 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-509957 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-509957 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-509957 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f9e70d72-4897-4473-a661-a23f2a2b8e08] Pending
helpers_test.go:344: "task-pv-pod-restore" [f9e70d72-4897-4473-a661-a23f2a2b8e08] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f9e70d72-4897-4473-a661-a23f2a2b8e08] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003553539s
addons_test.go:632: (dbg) Run:  kubectl --context addons-509957 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-509957 delete pod task-pv-pod-restore: (1.457471543s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-509957 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-509957 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.806512012s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable volumesnapshots --alsologtostderr -v=1: (1.327221429s)
--- PASS: TestAddons/parallel/CSI (58.70s)
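
The CSI test walks a full snapshot/restore cycle: PVC -> pod -> VolumeSnapshot -> restore PVC -> pod. The testdata manifests are not shown in the log, so the sketch below is an assumed minimal equivalent; the storage class csi-hostpath-sc and snapshot class csi-hostpath-snapclass are the names the csi-hostpath-driver addon conventionally installs, not values confirmed by this log, and the pod that mounts hpvc before the snapshot is taken is elided.

kubectl --context addons-509957 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

# Poll the same status fields the test helpers poll above.
kubectl --context addons-509957 get pvc hpvc -o jsonpath='{.status.phase}'
kubectl --context addons-509957 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'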

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-509957 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-509957 --alsologtostderr -v=1: (1.052634921s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-m6bwq" [b364545f-4da0-4317-beb3-5e1cd12cc210] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-m6bwq" [b364545f-4da0-4317-beb3-5e1cd12cc210] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.00409477s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable headlamp --alsologtostderr -v=1: (5.810748029s)
--- PASS: TestAddons/parallel/Headlamp (14.87s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-rnhdg" [0bdcdac1-bde1-4931-b594-c912de7602eb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005217664s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-509957
--- PASS: TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.84s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-509957 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-509957 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bf82ddb8-b788-4c28-988f-60e2f0facae4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bf82ddb8-b788-4c28-988f-60e2f0facae4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bf82ddb8-b788-4c28-988f-60e2f0facae4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003898817s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-509957 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 ssh "cat /opt/local-path-provisioner/pvc-44902031-247f-424a-89f5-58baf788752c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-509957 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-509957 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.601368791s)
--- PASS: TestAddons/parallel/LocalPath (51.84s)
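
The local-path test follows the same claim-then-consume pattern with the Rancher local-path provisioner. The actual testdata manifests are not reproduced in the log; the sketch below assumes the provisioner's conventional storage class name local-path and an illustrative busybox writer pod.

kubectl --context addons-509957 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sh", "-c", "echo local-path-provisioner > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# The provisioned volume lives under /opt/local-path-provisioner on the node,
# which is why the test can cat the written file over minikube ssh.
minikube -p addons-509957 ssh "ls /opt/local-path-provisioner/"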

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.91s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c7dzm" [43c65b34-f0d5-4bfd-9348-e239e413a3cb] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004196072s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-509957
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.91s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qql9d" [546c4cdf-ded5-47bd-9337-191faa1d3fe6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004605047s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-509957 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-509957 addons disable yakd --alsologtostderr -v=1: (5.892103952s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.23s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-509957
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-509957: (11.963977652s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-509957
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-509957
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-509957
--- PASS: TestAddons/StoppedEnableDisable (12.23s)

                                                
                                    
x
+
TestCertOptions (33.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-713058 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-713058 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.115938689s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-713058 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-713058 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-713058 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-713058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-713058
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-713058: (2.012418872s)
--- PASS: TestCertOptions (33.78s)
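
The certificate flags above should surface as extra SANs and a non-default port in the apiserver certificate. The openssl invocation and cert path below are the ones the test uses; the grep filters are an illustrative narrowing of that output.

minikube -p cert-options-713058 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"

# The admin kubeconfig inside the node should point at the custom apiserver port 8555.
minikube ssh -p cert-options-713058 -- "sudo cat /etc/kubernetes/admin.conf" | grep "server:"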

                                                
                                    
x
+
TestCertExpiration (234.03s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-905537 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-905537 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (44.184090109s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-905537 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-905537 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.465639813s)
helpers_test.go:175: Cleaning up "cert-expiration-905537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-905537
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-905537: (2.379592318s)
--- PASS: TestCertExpiration (234.03s)
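
With --cert-expiration the test first issues three-minute certificates, waits for them to lapse, then restarts the same profile with an 8760h (one-year) expiry. The start commands are taken from the log; the openssl check is an illustrative addition, not part of the test.

minikube start -p cert-expiration-905537 --memory=2048 --cert-expiration=3m \
  --driver=docker --container-runtime=containerd

# Inspect the validity window of the generated apiserver certificate.
minikube -p cert-expiration-905537 ssh \
  "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"

# Later, re-issue with a one-year expiry without recreating the cluster.
minikube start -p cert-expiration-905537 --memory=2048 --cert-expiration=8760h \
  --driver=docker --container-runtime=containerd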

                                                
                                    
x
+
TestForceSystemdFlag (32.85s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-862143 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-862143 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.522654271s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-862143 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-862143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-862143
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-862143: (2.03989783s)
--- PASS: TestForceSystemdFlag (32.85s)
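
The --force-systemd assertion reduces to checking the cgroup driver minikube wrote into the containerd config. In the sketch below the grep is an illustrative narrowing of the cat used by the test, and SystemdCgroup = true is the setting the flag is expected to produce, not something quoted from this log.

minikube start -p force-systemd-flag-demo --memory=2048 --force-systemd \
  --driver=docker --container-runtime=containerd

# Expect SystemdCgroup = true under the runc runtime options.
minikube -p force-systemd-flag-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup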

                                                
                                    
x
+
TestForceSystemdEnv (40.79s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-098328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-098328 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.856822548s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-098328 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-098328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-098328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-098328: (2.520081899s)
--- PASS: TestForceSystemdEnv (40.79s)

                                                
                                    
x
+
TestDockerEnvContainerd (47.4s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-583805 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-583805 --driver=docker  --container-runtime=containerd: (31.708908454s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-583805"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-583805": (1.027448605s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Z2wgvv3yq1Oc/agent.1618093" SSH_AGENT_PID="1618094" DOCKER_HOST=ssh://docker@127.0.0.1:34644 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Z2wgvv3yq1Oc/agent.1618093" SSH_AGENT_PID="1618094" DOCKER_HOST=ssh://docker@127.0.0.1:34644 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Z2wgvv3yq1Oc/agent.1618093" SSH_AGENT_PID="1618094" DOCKER_HOST=ssh://docker@127.0.0.1:34644 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.327948123s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Z2wgvv3yq1Oc/agent.1618093" SSH_AGENT_PID="1618094" DOCKER_HOST=ssh://docker@127.0.0.1:34644 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-583805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-583805
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-583805: (1.956865771s)
--- PASS: TestDockerEnvContainerd (47.40s)
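
docker-env with --ssh-host --ssh-add points a local docker CLI at the node's daemon over SSH instead of TLS, which is why the test threads DOCKER_HOST, SSH_AUTH_SOCK and SSH_AGENT_PID through explicit environment variables. Interactively, the usual pattern is assumed to be evaluating the command's output, as sketched here with the same build and listing steps the test performs:

# Point the local docker CLI at the daemon inside the minikube node over SSH.
eval "$(minikube -p dockerenv-583805 docker-env --ssh-host --ssh-add)"

docker version
DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
docker image ls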

                                                
                                    
x
+
TestErrorSpam/setup (28.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-239883 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-239883 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-239883 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-239883 --driver=docker  --container-runtime=containerd: (28.191109981s)
--- PASS: TestErrorSpam/setup (28.19s)

                                                
                                    
x
+
TestErrorSpam/start (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

                                                
                                    
x
+
TestErrorSpam/status (1.17s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 status
--- PASS: TestErrorSpam/status (1.17s)

                                                
                                    
x
+
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.9s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

                                                
                                    
x
+
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 stop: (1.286056089s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-239883 --log_dir /tmp/nospam-239883 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-1592376/.minikube/files/etc/test/nested/copy/1597760/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (88.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-209375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m28.550415191s)
--- PASS: TestFunctional/serial/StartWithProxy (88.55s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.37s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-209375 --alsologtostderr -v=8: (6.366639992s)
functional_test.go:663: soft start took 6.372470911s for "functional-209375" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.37s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-209375 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:3.1: (1.591212415s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:3.3: (1.279854776s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 cache add registry.k8s.io/pause:latest: (1.259191083s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-209375 /tmp/TestFunctionalserialCacheCmdcacheadd_local626779896/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache add minikube-local-cache-test:functional-209375
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache delete minikube-local-cache-test:functional-209375
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-209375
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (309.242066ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 cache reload: (1.123546811s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
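
Taken together, the cache subtests above exercise the full lifecycle of minikube's image cache. The condensed sequence, using the same images and the same in-node verification via crictl as the log, looks like this:

# Add remote images to the profile's cache.
minikube -p functional-209375 cache add registry.k8s.io/pause:3.1
minikube -p functional-209375 cache add registry.k8s.io/pause:3.3
minikube -p functional-209375 cache add registry.k8s.io/pause:latest

# List cached images and confirm they are present inside the node.
minikube cache list
minikube -p functional-209375 ssh sudo crictl images

# If an image is removed inside the node, cache reload restores it.
minikube -p functional-209375 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-209375 cache reload
minikube -p functional-209375 ssh sudo crictl inspecti registry.k8s.io/pause:latest

# Drop entries from the cache when done.
minikube cache delete registry.k8s.io/pause:3.1
minikube cache delete registry.k8s.io/pause:latest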

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 kubectl -- --context functional-209375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-209375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (49.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-209375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.063586208s)
functional_test.go:761: restart took 49.063696475s for "functional-209375" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.06s)
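
--extra-config passes a component.flag=value pair straight through to the named component and is applied by restarting the existing profile, which is why this subtest reports a restart duration. The invocation from the log, shown on its own:

# component.flag=value; here the kube-apiserver admission plugin list is extended.
minikube start -p functional-209375 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all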

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-209375 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
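
The health check above reads the control-plane pods' phase and readiness from the API; an equivalent one-liner using the same selector, with an illustrative jsonpath output format:

kubectl --context functional-209375 get pods -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'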

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 logs: (1.77435032s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 logs --file /tmp/TestFunctionalserialLogsFileCmd2752835367/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 logs --file /tmp/TestFunctionalserialLogsFileCmd2752835367/001/logs.txt: (1.690680953s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.86s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-209375 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-209375
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-209375: exit status 115 (606.298404ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31333 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-209375 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.86s)
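Note: the failure path above is the interesting part: a Service whose selector matches no running pod makes `minikube service` exit non-zero (115 in this run) with an SVC_UNREACHABLE message. A sketch, assuming the same profile and a manifest equivalent to testdata/invalidsvc.yaml from the minikube repo, that asserts the non-zero exit:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-209375" // assumption: profile/context from this run

	// Apply a Service with no backing pods.
	if out, err := exec.Command("kubectl", "--context", profile, "apply", "-f", "testdata/invalidsvc.yaml").CombinedOutput(); err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
	defer exec.Command("kubectl", "--context", profile, "delete", "-f", "testdata/invalidsvc.yaml").Run()

	// `minikube service` should refuse to open it and exit non-zero.
	out, err := exec.Command("minikube", "service", "invalid-svc", "-p", profile).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("got the expected failure, exit code %d\n%s", exitErr.ExitCode(), out)
		return
	}
	log.Fatalf("expected a non-zero exit for an unreachable service, got err=%v\n%s", err, out)
}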

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 config get cpus: exit status 14 (73.182723ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 config get cpus: exit status 14 (75.384613ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
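Note: the config subtest cycles unset, get (exit 14), set, get, unset, get (exit 14). A sketch of the same round trip, assuming, as this run shows, that exit code 14 means "key not found in config":

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes minikube with the given args and returns trimmed output plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	} else if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	p := "functional-209375" // assumption: profile name from this run

	run("-p", p, "config", "unset", "cpus")
	if _, code := run("-p", p, "config", "get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 for a missing key, got %d", code)
	}
	run("-p", p, "config", "set", "cpus", "2")
	if val, code := run("-p", p, "config", "get", "cpus"); code != 0 || val == "" {
		log.Fatalf("expected cpus to be set, got %q (exit %d)", val, code)
	}
	run("-p", p, "config", "unset", "cpus")
	if _, code := run("-p", p, "config", "get", "cpus"); code != 14 {
		log.Fatalf("expected exit 14 after unset, got %d", code)
	}
	fmt.Println("config set/get/unset round trip behaved as in the report")
}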

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (7.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-209375 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-209375 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1632538: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.58s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-209375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (189.623462ms)

                                                
                                                
-- stdout --
	* [functional-209375] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:44:26.478282 1632234 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:44:26.478479 1632234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:44:26.478506 1632234 out.go:358] Setting ErrFile to fd 2...
	I0912 22:44:26.478527 1632234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:44:26.478788 1632234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:44:26.479511 1632234 out.go:352] Setting JSON to false
	I0912 22:44:26.480570 1632234 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26794,"bootTime":1726154273,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 22:44:26.480667 1632234 start.go:139] virtualization:  
	I0912 22:44:26.482948 1632234 out.go:177] * [functional-209375] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 22:44:26.485226 1632234 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:44:26.485377 1632234 notify.go:220] Checking for updates...
	I0912 22:44:26.488574 1632234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:44:26.490047 1632234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:44:26.491673 1632234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 22:44:26.493421 1632234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 22:44:26.494913 1632234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:44:26.496981 1632234 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:44:26.497591 1632234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:44:26.518942 1632234 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:44:26.519356 1632234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:44:26.599211 1632234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:44:26.589866191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:44:26.599320 1632234 docker.go:318] overlay module found
	I0912 22:44:26.602403 1632234 out.go:177] * Using the docker driver based on existing profile
	I0912 22:44:26.604123 1632234 start.go:297] selected driver: docker
	I0912 22:44:26.604139 1632234 start.go:901] validating driver "docker" against &{Name:functional-209375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-209375 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:44:26.604264 1632234 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:44:26.606320 1632234 out.go:201] 
	W0912 22:44:26.607775 1632234 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 22:44:26.609512 1632234 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
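Note: the dry-run check exercises start-time validation only: asking for 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23 here), while the same dry run without the memory override passes. A sketch under those assumptions (profile name and flags copied from this run):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// dryRun starts a --dry-run against the existing profile and returns output plus exit code.
func dryRun(extra ...string) (string, int) {
	args := append([]string{"start", "-p", "functional-209375", "--dry-run",
		"--driver=docker", "--container-runtime=containerd"}, extra...)
	out, err := exec.Command("minikube", args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode()
	} else if err != nil {
		log.Fatal(err)
	}
	return string(out), 0
}

func main() {
	// 250MB is below minikube's usable minimum, so validation should fail fast.
	out, code := dryRun("--memory", "250MB")
	if code == 0 || !strings.Contains(out, "RSRC_INSUFFICIENT_REQ_MEMORY") {
		log.Fatalf("expected a memory-validation failure, got exit %d:\n%s", code, out)
	}
	// Without the memory override the same dry run should pass validation.
	if _, code := dryRun(); code != 0 {
		log.Fatalf("expected the plain dry run to succeed, got exit %d", code)
	}
	fmt.Println("dry-run validation rejected 250MB as in the report")
}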

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-209375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-209375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (232.007061ms)

                                                
                                                
-- stdout --
	* [functional-209375] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:44:26.268103 1632146 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:44:26.268276 1632146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:44:26.268286 1632146 out.go:358] Setting ErrFile to fd 2...
	I0912 22:44:26.268303 1632146 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:44:26.269618 1632146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:44:26.270061 1632146 out.go:352] Setting JSON to false
	I0912 22:44:26.271157 1632146 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26794,"bootTime":1726154273,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 22:44:26.271229 1632146 start.go:139] virtualization:  
	I0912 22:44:26.276413 1632146 out.go:177] * [functional-209375] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0912 22:44:26.278572 1632146 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:44:26.278631 1632146 notify.go:220] Checking for updates...
	I0912 22:44:26.283201 1632146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:44:26.285535 1632146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 22:44:26.287264 1632146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 22:44:26.289057 1632146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 22:44:26.291370 1632146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:44:26.294067 1632146 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:44:26.294589 1632146 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:44:26.323984 1632146 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:44:26.324090 1632146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:44:26.401290 1632146 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:44:26.390013225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:44:26.401418 1632146 docker.go:318] overlay module found
	I0912 22:44:26.404269 1632146 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0912 22:44:26.406270 1632146 start.go:297] selected driver: docker
	I0912 22:44:26.406289 1632146 start.go:901] validating driver "docker" against &{Name:functional-209375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-209375 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:44:26.406421 1632146 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:44:26.409103 1632146 out.go:201] 
	W0912 22:44:26.410867 1632146 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 22:44:26.412526 1632146 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
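Note: this is the same failing dry run, but with the output localized to French. The transcript only shows the French message, not how the locale was selected; the sketch below assumes the usual LC_ALL/LANG environment variables are what switch minikube's message catalogue.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Assumption: minikube picks its translations from the standard locale variables.
	cmd := exec.Command("minikube", "start", "-p", "functional-209375", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8")

	out, _ := cmd.CombinedOutput() // a non-zero exit is expected, as in the English run
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the localized error message")
		return
	}
	log.Fatalf("expected French output, got:\n%s", out)
}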

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)
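Note: `status` supports both Go-template and JSON output; the template fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig. A sketch (single-node profile from this run assumed; the JSON is decoded loosely rather than into a fixed schema):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	p := "functional-209375" // assumption: profile from this run

	// Custom template output, using the same fields as the test above.
	out, err := exec.Command("minikube", "-p", p, "status", "-f",
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	if err != nil {
		log.Fatalf("status -f failed: %v\n%s", err, out)
	}
	fmt.Printf("templated status: %s\n", out)

	// JSON output, decoded into a loose map so only a couple of fields are read.
	out, err = exec.Command("minikube", "-p", p, "status", "-o", "json").CombinedOutput()
	if err != nil {
		log.Fatalf("status -o json failed: %v\n%s", err, out)
	}
	var st map[string]any
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("unexpected status JSON: %v", err)
	}
	fmt.Printf("host=%v apiserver=%v\n", st["Host"], st["APIServer"])
}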

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-209375 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-209375 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xkcrx" [4dbc1a16-53ee-44f6-9c01-bb07bac840d5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0912 22:44:04.755250 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-xkcrx" [4dbc1a16-53ee-44f6-9c01-bb07bac840d5] Running
E0912 22:44:12.438315 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004165987s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31292
functional_test.go:1675: http://192.168.49.2:31292: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-xkcrx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31292
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)
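Note: the flow above is create a deployment, expose it as a NodePort service, ask minikube for the URL, then GET it. A sketch of the same flow, assuming the echoserver-arm image from this run and that `service --url` prints just the URL (it did here); the `kubectl wait` step stands in for the test's pod polling:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	p := "functional-209375" // assumption: profile/context from this run

	must := func(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}
	must("kubectl", "--context", p, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver-arm:1.8")
	must("kubectl", "--context", p, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	must("kubectl", "--context", p, "wait", "--for=condition=available",
		"deployment/hello-node-connect", "--timeout=120s")

	// Ask minikube for the NodePort URL (http://<node-ip>:<port> in this run).
	out, err := exec.Command("minikube", "-p", p, "service", "hello-node-connect", "--url").CombinedOutput()
	if err != nil {
		log.Fatalf("service --url failed: %v\n%s", err, out)
	}
	url := strings.TrimSpace(string(out))

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}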

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8d51aaf6-6e0a-4704-9f5e-c0da5246ab6b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00349885s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-209375 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-209375 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-209375 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-209375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a4d210ec-1544-44af-95c8-d8b51f96d403] Pending
E0912 22:44:02.184318 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.191319 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.202905 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.224430 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.265911 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.347353 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.509139 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:02.830707 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [a4d210ec-1544-44af-95c8-d8b51f96d403] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0912 22:44:03.472678 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [a4d210ec-1544-44af-95c8-d8b51f96d403] Running
E0912 22:44:07.316922 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003148812s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-209375 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-209375 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-209375 delete -f testdata/storage-provisioner/pod.yaml: (1.540238262s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-209375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [806119ca-1e07-44a2-8f92-32ef7f3e3ad7] Pending
helpers_test.go:344: "sp-pod" [806119ca-1e07-44a2-8f92-32ef7f3e3ad7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003384024s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-209375 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.61s)
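Note: the persistence check above is: bind a PVC, write a file from one pod, delete the pod, start a second pod against the same claim, and confirm the file survived. A sketch using the same testdata manifests from the minikube repo; the `kubectl wait` calls stand in for the test's readiness polling:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kc runs kubectl against the given context and fails the sketch on any error.
func kc(ctx string, args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "functional-209375" // assumption: context from this run

	kc(ctx, "apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc(ctx, "wait", "--for=condition=ready", "pod/sp-pod", "--timeout=180s")

	// Write a marker onto the claim, then recreate the pod.
	kc(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc(ctx, "wait", "--for=condition=ready", "pod/sp-pod", "--timeout=180s")

	// The marker written by the first pod should still be on the persistent volume.
	if !strings.Contains(kc(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
		log.Fatal("file written before pod deletion is missing; the volume did not persist")
	}
	fmt.Println("data survived pod recreation via the PVC")
}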

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh -n functional-209375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cp functional-209375:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3757078375/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh -n functional-209375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh -n functional-209375 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)
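Note: CpCmd round-trips a file with `minikube cp` in both directions and reads it back over `minikube ssh`. A sketch of the same round trip, assuming the profile from this run and the same in-node path:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	p := "functional-209375" // assumption: profile from this run

	src := filepath.Join(os.TempDir(), "cp-test.txt")
	if err := os.WriteFile(src, []byte("round-trip me\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	run := func(args ...string) []byte {
		out, err := exec.Command("minikube", append([]string{"-p", p}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
		return out
	}

	// Copy host -> node, read it back over ssh, then copy node -> host again.
	run("cp", src, "/home/docker/cp-test.txt")
	got := string(run("ssh", "-n", p, "sudo cat /home/docker/cp-test.txt"))
	dst := filepath.Join(os.TempDir(), "cp-test-back.txt")
	run("cp", p+":/home/docker/cp-test.txt", dst)

	back, err := os.ReadFile(dst)
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(got, "round-trip me") || strings.TrimSpace(string(back)) != "round-trip me" {
		log.Fatalf("copied contents differ: ssh=%q file=%q", got, back)
	}
	fmt.Println("cp round trip preserved the file contents")
}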

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1597760/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /etc/test/nested/copy/1597760/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1597760.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /etc/ssl/certs/1597760.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1597760.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /usr/share/ca-certificates/1597760.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15977602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /etc/ssl/certs/15977602.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15977602.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /usr/share/ca-certificates/15977602.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
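Note: CertSync verifies that host certificates named after the test PID show up inside the node under /etc/ssl/certs and /usr/share/ca-certificates, along with the hashed symlink names. A sketch checking the same paths over `minikube ssh`; the filenames are the ones observed in this run and would differ elsewhere:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	p := "functional-209375" // assumption: profile from this run
	// Paths observed in the run above; the numeric parts come from the test PID.
	paths := []string{
		"/etc/ssl/certs/1597760.pem",
		"/usr/share/ca-certificates/1597760.pem",
		"/etc/ssl/certs/51391683.0", // hashed symlink for the same certificate
	}
	for _, path := range paths {
		out, err := exec.Command("minikube", "-p", p, "ssh",
			fmt.Sprintf("sudo cat %s", path)).CombinedOutput()
		if err != nil {
			log.Fatalf("cert %s not readable in the node: %v\n%s", path, err, out)
		}
		fmt.Printf("%s present (%d bytes)\n", path, len(out))
	}
}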

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-209375 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "sudo systemctl is-active docker": exit status 1 (362.730093ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "sudo systemctl is-active crio": exit status 1 (262.122346ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
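Note: with containerd as the active runtime, the docker and crio units should both be inactive, so `systemctl is-active` over `minikube ssh` exits non-zero, exactly as captured above. A short sketch of that assertion:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	p := "functional-209375" // assumption: profile from this run

	// On a containerd cluster both units should report "inactive" and exit non-zero.
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("minikube", "-p", p, "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		if err == nil {
			log.Fatalf("%s is unexpectedly active:\n%s", unit, out)
		}
		fmt.Printf("%s: %s\n", unit, strings.TrimSpace(string(out)))
	}
}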

                                                
                                    
x
+
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1629771: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-209375 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ccac5026-39d0-4c8f-9370-48179d0cd217] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ccac5026-39d0-4c8f-9370-48179d0cd217] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003958999s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-209375 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.93.243 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
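Note: the tunnel subtests above run `minikube tunnel` as a long-lived process, deploy a LoadBalancer service (nginx-svc from testdata/testsvc.yaml), wait for an ingress IP in .status.loadBalancer, then hit it directly (10.109.93.243 in this run). A sketch of that flow, assuming nginx-svc is already applied and the tunnel has the privileges it needs:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "functional-209375" // assumption: profile/context from this run

	// `minikube tunnel` stays in the foreground, so run it in the background here.
	tunnel := exec.Command("minikube", "-p", ctx, "tunnel")
	if err := tunnel.Start(); err != nil {
		log.Fatal(err)
	}
	defer tunnel.Process.Kill()

	// Poll the service until the tunnel publishes an ingress IP.
	var ip string
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			ip = strings.TrimSpace(string(out))
			break
		}
		time.Sleep(2 * time.Second)
	}
	if ip == "" {
		log.Fatal("no ingress IP appeared; is the tunnel running with enough privileges?")
	}

	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatalf("GET http://%s: %v", ip, err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at http://%s answered with %d\n", ip, resp.StatusCode)
}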

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-209375 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-209375 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-209375 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7f8dl" [a234b1e5-a1ce-49e1-9825-8d9139f4a764] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7f8dl" [a234b1e5-a1ce-49e1-9825-8d9139f4a764] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003951936s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
E0912 22:44:22.679907 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1315: Took "396.560582ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.962307ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "425.390558ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "72.658478ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
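Note: `profile list -o json` is the machine-readable form exercised above. A sketch that decodes it; the "valid"/"invalid" grouping is an assumption about the JSON shape, not something shown in this transcript:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").CombinedOutput()
	if err != nil {
		log.Fatalf("profile list -o json: %v\n%s", err, out)
	}
	// Assumption: the output groups profiles into "valid" and "invalid" lists;
	// only the profile names are read here.
	var profiles struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
		Invalid []struct {
			Name string `json:"Name"`
		} `json:"invalid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("unexpected profile list JSON: %v", err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	for _, p := range profiles.Invalid {
		fmt.Println("invalid profile:", p.Name)
	}
}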

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service list -o json
functional_test.go:1494: Took "603.877351ms" to run "out/minikube-linux-arm64 -p functional-209375 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdany-port3800920563/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726181063418840834" to /tmp/TestFunctionalparallelMountCmdany-port3800920563/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726181063418840834" to /tmp/TestFunctionalparallelMountCmdany-port3800920563/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726181063418840834" to /tmp/TestFunctionalparallelMountCmdany-port3800920563/001/test-1726181063418840834
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.074212ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 22:44 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 22:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 22:44 test-1726181063418840834
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh cat /mount-9p/test-1726181063418840834
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-209375 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d084bedb-12a5-44e6-9e21-e7d6b2ca3e13] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d084bedb-12a5-44e6-9e21-e7d6b2ca3e13] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d084bedb-12a5-44e6-9e21-e7d6b2ca3e13] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004880472s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-209375 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdany-port3800920563/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.42s)
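Note: the 9p mount flow above is: run `minikube mount host-dir:/mount-9p` in the background, confirm the mount with `findmnt -T /mount-9p` over ssh (the first attempt failing while the mount comes up, as captured here), then check that host files are visible in the guest. A sketch under those assumptions, with retries standing in for the test's polling:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"
)

func main() {
	p := "functional-209375" // assumption: profile from this run

	dir, err := os.MkdirTemp("", "mount-9p")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "created-by-test"), []byte("hello from the host"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Run the mount in the background, as the test's daemon helper does.
	mount := exec.Command("minikube", "mount", "-p", p, dir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// The mount takes a moment to appear; retry findmnt a few times like the test does.
	for i := 0; i < 10; i++ {
		if out, err := exec.Command("minikube", "-p", p, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput(); err == nil {
			fmt.Print(string(out))
			break
		}
		time.Sleep(2 * time.Second)
	}

	out, err := exec.Command("minikube", "-p", p, "ssh", "cat /mount-9p/created-by-test").CombinedOutput()
	if err != nil || !strings.Contains(string(out), "hello from the host") {
		log.Fatalf("host file not visible in the guest: %v\n%s", err, out)
	}
	fmt.Println("host directory is visible inside the node over 9p")
}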

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30428
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
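
Note: the three ServiceCmd checks above query the same NodePort service in different ways; a minimal sketch of those invocations, with the service name and profile taken from the log.
    # HTTPS endpoint URL for the service.
    $ out/minikube-linux-arm64 -p functional-209375 service --namespace=default --https --url hello-node
    # Only the node IP, extracted via a Go template.
    $ out/minikube-linux-arm64 -p functional-209375 service hello-node --url --format={{.IP}}
    # Plain HTTP endpoint URL.
    $ out/minikube-linux-arm64 -p functional-209375 service hello-node --url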

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdspecific-port3120703594/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.003987ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdspecific-port3120703594/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "sudo umount -f /mount-9p": exit status 1 (334.739879ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-209375 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdspecific-port3120703594/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2024/09/12 22:44:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount1: exit status 1 (786.637503ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-209375 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-209375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup327837968/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.48s)
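
Note: VerifyCleanup mounts one host directory at three targets and then relies on "mount --kill=true" to terminate every mount helper for the profile; a sketch under the same assumptions (host path is a placeholder).
    # Start three mounts of one host directory in the background.
    $ out/minikube-linux-arm64 mount -p functional-209375 /tmp/src:/mount1 --alsologtostderr -v=1 &
    $ out/minikube-linux-arm64 mount -p functional-209375 /tmp/src:/mount2 --alsologtostderr -v=1 &
    $ out/minikube-linux-arm64 mount -p functional-209375 /tmp/src:/mount3 --alsologtostderr -v=1 &
    # Confirm a target is mounted inside the node (repeat for /mount2 and /mount3).
    $ out/minikube-linux-arm64 -p functional-209375 ssh "findmnt -T" /mount1
    # Kill all mount helper processes for this profile in one call.
    $ out/minikube-linux-arm64 mount -p functional-209375 --kill=true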

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 version -o=json --components: (1.323581186s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-209375 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-209375
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-209375
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-209375 image ls --format short --alsologtostderr:
I0912 22:44:43.403678 1635456 out.go:345] Setting OutFile to fd 1 ...
I0912 22:44:43.403957 1635456 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.403970 1635456 out.go:358] Setting ErrFile to fd 2...
I0912 22:44:43.403976 1635456 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.404300 1635456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
I0912 22:44:43.405017 1635456 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.405192 1635456 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.405718 1635456 cli_runner.go:164] Run: docker container inspect functional-209375 --format={{.State.Status}}
I0912 22:44:43.441166 1635456 ssh_runner.go:195] Run: systemctl --version
I0912 22:44:43.441222 1635456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209375
I0912 22:44:43.471947 1635456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/functional-209375/id_rsa Username:docker}
I0912 22:44:43.568552 1635456 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-209375 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/minikube-local-cache-test | functional-209375  | sha256:d9239a | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kicbase/echo-server               | functional-209375  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-209375 image ls --format table --alsologtostderr:
I0912 22:44:44.105608 1635658 out.go:345] Setting OutFile to fd 1 ...
I0912 22:44:44.105785 1635658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:44.105813 1635658 out.go:358] Setting ErrFile to fd 2...
I0912 22:44:44.105834 1635658 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:44.106106 1635658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
I0912 22:44:44.106779 1635658 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:44.106959 1635658 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:44.107524 1635658 cli_runner.go:164] Run: docker container inspect functional-209375 --format={{.State.Status}}
I0912 22:44:44.125196 1635658 ssh_runner.go:195] Run: systemctl --version
I0912 22:44:44.125248 1635658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209375
I0912 22:44:44.146657 1635658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/functional-209375/id_rsa Username:docker}
I0912 22:44:44.263459 1635658 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-209375 image ls --format json --alsologtostderr:
[{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"
size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-209375"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"4532467
5"},{"id":"sha256:d9239a8da0fa492140e98faf11203157623a4bcc56cec9f70af1b315e05a5a4a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-209375"],"size":"991"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84f
d6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":[
"docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-209375 image ls --format json --alsologtostderr:
I0912 22:44:43.817022 1635579 out.go:345] Setting OutFile to fd 1 ...
I0912 22:44:43.817221 1635579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.817232 1635579 out.go:358] Setting ErrFile to fd 2...
I0912 22:44:43.817236 1635579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.817487 1635579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
I0912 22:44:43.818110 1635579 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.818233 1635579 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.822814 1635579 cli_runner.go:164] Run: docker container inspect functional-209375 --format={{.State.Status}}
I0912 22:44:43.851347 1635579 ssh_runner.go:195] Run: systemctl --version
I0912 22:44:43.851415 1635579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209375
I0912 22:44:43.873517 1635579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/functional-209375/id_rsa Username:docker}
I0912 22:44:43.968822 1635579 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-209375 image ls --format yaml --alsologtostderr:
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:d9239a8da0fa492140e98faf11203157623a4bcc56cec9f70af1b315e05a5a4a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-209375
size: "991"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-209375
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-209375 image ls --format yaml --alsologtostderr:
I0912 22:44:43.515542 1635501 out.go:345] Setting OutFile to fd 1 ...
I0912 22:44:43.515829 1635501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.515845 1635501 out.go:358] Setting ErrFile to fd 2...
I0912 22:44:43.515850 1635501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.516233 1635501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
I0912 22:44:43.517186 1635501 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.517423 1635501 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.518173 1635501 cli_runner.go:164] Run: docker container inspect functional-209375 --format={{.State.Status}}
I0912 22:44:43.536875 1635501 ssh_runner.go:195] Run: systemctl --version
I0912 22:44:43.536933 1635501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209375
I0912 22:44:43.557121 1635501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/functional-209375/id_rsa Username:docker}
I0912 22:44:43.664224 1635501 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
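
Note: the four ImageList variants above are the same "image ls" command with different output formats; for reference:
    # List images in the node's containerd store in each supported format.
    $ out/minikube-linux-arm64 -p functional-209375 image ls --format short
    $ out/minikube-linux-arm64 -p functional-209375 image ls --format table
    $ out/minikube-linux-arm64 -p functional-209375 image ls --format json
    $ out/minikube-linux-arm64 -p functional-209375 image ls --format yaml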

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-209375 ssh pgrep buildkitd: exit status 1 (314.967828ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image build -t localhost/my-image:functional-209375 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 image build -t localhost/my-image:functional-209375 testdata/build --alsologtostderr: (3.138280283s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-209375 image build -t localhost/my-image:functional-209375 testdata/build --alsologtostderr:
I0912 22:44:43.995774 1635630 out.go:345] Setting OutFile to fd 1 ...
I0912 22:44:43.996645 1635630 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.996653 1635630 out.go:358] Setting ErrFile to fd 2...
I0912 22:44:43.996658 1635630 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:44:43.996934 1635630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
I0912 22:44:43.997559 1635630 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.998662 1635630 config.go:182] Loaded profile config "functional-209375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0912 22:44:43.999161 1635630 cli_runner.go:164] Run: docker container inspect functional-209375 --format={{.State.Status}}
I0912 22:44:44.026353 1635630 ssh_runner.go:195] Run: systemctl --version
I0912 22:44:44.026411 1635630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-209375
I0912 22:44:44.048226 1635630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/functional-209375/id_rsa Username:docker}
I0912 22:44:44.144713 1635630 build_images.go:161] Building image from path: /tmp/build.736740568.tar
I0912 22:44:44.144851 1635630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 22:44:44.155655 1635630 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.736740568.tar
I0912 22:44:44.159682 1635630 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.736740568.tar: stat -c "%s %y" /var/lib/minikube/build/build.736740568.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.736740568.tar': No such file or directory
I0912 22:44:44.159741 1635630 ssh_runner.go:362] scp /tmp/build.736740568.tar --> /var/lib/minikube/build/build.736740568.tar (3072 bytes)
I0912 22:44:44.200338 1635630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.736740568
I0912 22:44:44.214912 1635630 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.736740568 -xf /var/lib/minikube/build/build.736740568.tar
I0912 22:44:44.225332 1635630 containerd.go:394] Building image: /var/lib/minikube/build/build.736740568
I0912 22:44:44.225404 1635630 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.736740568 --local dockerfile=/var/lib/minikube/build/build.736740568 --output type=image,name=localhost/my-image:functional-209375
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:03ced4848de94f54b37bfc56c88d2a92de0ea8f9cc2ba47baabd7e075633dc37
#8 exporting manifest sha256:03ced4848de94f54b37bfc56c88d2a92de0ea8f9cc2ba47baabd7e075633dc37 0.0s done
#8 exporting config sha256:ddb7c73fad1b34da79fec0364cc88f15ab3870fa03ae6d0c8972e6e2d87ce023 0.0s done
#8 naming to localhost/my-image:functional-209375 done
#8 DONE 0.1s
I0912 22:44:47.043762 1635630 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.736740568 --local dockerfile=/var/lib/minikube/build/build.736740568 --output type=image,name=localhost/my-image:functional-209375: (2.818327878s)
I0912 22:44:47.043829 1635630 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.736740568
I0912 22:44:47.053560 1635630 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.736740568.tar
I0912 22:44:47.066650 1635630 build_images.go:217] Built localhost/my-image:functional-209375 from /tmp/build.736740568.tar
I0912 22:44:47.066679 1635630 build_images.go:133] succeeded building to: functional-209375
I0912 22:44:47.066684 1635630 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
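
Note: from the buildkit steps in the log (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt /), the testdata/build context appears to be roughly the following; this is a reconstruction for illustration, not the checked-in file.
    # Approximate contents of testdata/build/Dockerfile, inferred from the build steps above.
    $ cat testdata/build/Dockerfile
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    # Build it with the node's buildkit and tag the result locally.
    $ out/minikube-linux-arm64 -p functional-209375 image build -t localhost/my-image:functional-209375 testdata/build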

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-209375
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr: (1.203727576s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
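
Note: the Setup/ImageLoadDaemon pair above first tags an image in the host Docker daemon and then copies it into the cluster runtime; roughly:
    # Tag an existing local image with the profile-specific name (the Setup step).
    $ docker pull kicbase/echo-server:1.0
    $ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-209375
    # Load it from the host daemon into the node's containerd store, then verify.
    $ out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375
    $ out/minikube-linux-arm64 -p functional-209375 image ls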

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr: (1.031933495s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 update-context --alsologtostderr -v=2
E0912 22:44:43.162369 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-209375
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-209375 image load --daemon kicbase/echo-server:functional-209375 --alsologtostderr: (1.085615489s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image save kicbase/echo-server:functional-209375 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image rm kicbase/echo-server:functional-209375 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-209375
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-209375 image save --daemon kicbase/echo-server:functional-209375 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-209375
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
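
Note: together, the SaveToFile/Remove/LoadFromFile/SaveDaemon steps above form a round trip between the node's image store, a tarball, and the host Docker daemon; the same commands as in the log, with the CI workspace path shortened to a local placeholder.
    # Export the image from the node to a tarball on the host.
    $ out/minikube-linux-arm64 -p functional-209375 image save kicbase/echo-server:functional-209375 ./echo-server-save.tar
    # Remove it from the node, then restore it from the tarball.
    $ out/minikube-linux-arm64 -p functional-209375 image rm kicbase/echo-server:functional-209375
    $ out/minikube-linux-arm64 -p functional-209375 image load ./echo-server-save.tar
    # Push the image back into the host Docker daemon and confirm it is there.
    $ out/minikube-linux-arm64 -p functional-209375 image save --daemon kicbase/echo-server:functional-209375
    $ docker image inspect kicbase/echo-server:functional-209375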

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-209375
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-209375
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-209375
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (133.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-638724 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0912 22:45:24.123841 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:46:46.046086 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-638724 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.652548233s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.48s)
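
Note: the multi-control-plane cluster above is created with a single start invocation; a sketch using the flags from the log (profile name and resources as shown).
    # Start a multi-control-plane (HA) cluster on the docker driver with containerd.
    $ out/minikube-linux-arm64 start -p ha-638724 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    # Report the status of every node in the profile.
    $ out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr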

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (33.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-638724 -- rollout status deployment/busybox: (30.269816725s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-mx82g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-zdq9f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-mx82g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-zdq9f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-mx82g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-zdq9f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.15s)
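
Note: DeployApp rolls out a small busybox deployment and checks DNS from each replica; the core sequence, with one pod name from the log as the example.
    # Deploy the test workload and wait for the rollout to finish.
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- rollout status deployment/busybox
    # Resolve an external name and the in-cluster API service name from inside a pod.
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- nslookup kubernetes.io
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- nslookup kubernetes.default.svc.cluster.local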

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-mx82g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-mx82g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-zdq9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-zdq9f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)
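
Note: PingHostFromPods resolves the host gateway name inside each pod and pings the address it returns; the per-pod check looks like this (pod name from the log; awk/cut extract the resolved IP from the nslookup output).
    # Resolve host.minikube.internal from inside the pod and pull out the address.
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # Ping the resolved gateway address (192.168.49.1 on the docker driver in this run).
    $ out/minikube-linux-arm64 kubectl -p ha-638724 -- exec busybox-7dff88458-bjgzv -- sh -c "ping -c 1 192.168.49.1"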

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (21.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-638724 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-638724 -v=7 --alsologtostderr: (20.319814722s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr: (1.039422682s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.36s)
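
Note: adding the worker is a single "node add" against the running profile, followed by a status check, exactly as run above.
    # Add a worker node to the existing HA profile and re-check node status.
    $ out/minikube-linux-arm64 node add -p ha-638724 -v=7 --alsologtostderr
    $ out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr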

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-638724 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.180629458s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 status --output json -v=7 --alsologtostderr: (1.010151533s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp testdata/cp-test.txt ha-638724:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile847068031/001/cp-test_ha-638724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724:/home/docker/cp-test.txt ha-638724-m02:/home/docker/cp-test_ha-638724_ha-638724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test_ha-638724_ha-638724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724:/home/docker/cp-test.txt ha-638724-m03:/home/docker/cp-test_ha-638724_ha-638724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test_ha-638724_ha-638724-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724:/home/docker/cp-test.txt ha-638724-m04:/home/docker/cp-test_ha-638724_ha-638724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test_ha-638724_ha-638724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp testdata/cp-test.txt ha-638724-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile847068031/001/cp-test_ha-638724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m02:/home/docker/cp-test.txt ha-638724:/home/docker/cp-test_ha-638724-m02_ha-638724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test_ha-638724-m02_ha-638724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m02:/home/docker/cp-test.txt ha-638724-m03:/home/docker/cp-test_ha-638724-m02_ha-638724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test_ha-638724-m02_ha-638724-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m02:/home/docker/cp-test.txt ha-638724-m04:/home/docker/cp-test_ha-638724-m02_ha-638724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test_ha-638724-m02_ha-638724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp testdata/cp-test.txt ha-638724-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile847068031/001/cp-test_ha-638724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m03:/home/docker/cp-test.txt ha-638724:/home/docker/cp-test_ha-638724-m03_ha-638724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test_ha-638724-m03_ha-638724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m03:/home/docker/cp-test.txt ha-638724-m02:/home/docker/cp-test_ha-638724-m03_ha-638724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test_ha-638724-m03_ha-638724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m03:/home/docker/cp-test.txt ha-638724-m04:/home/docker/cp-test_ha-638724-m03_ha-638724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test_ha-638724-m03_ha-638724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp testdata/cp-test.txt ha-638724-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile847068031/001/cp-test_ha-638724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m04:/home/docker/cp-test.txt ha-638724:/home/docker/cp-test_ha-638724-m04_ha-638724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724 "sudo cat /home/docker/cp-test_ha-638724-m04_ha-638724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m04:/home/docker/cp-test.txt ha-638724-m02:/home/docker/cp-test_ha-638724-m04_ha-638724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m02 "sudo cat /home/docker/cp-test_ha-638724-m04_ha-638724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 cp ha-638724-m04:/home/docker/cp-test.txt ha-638724-m03:/home/docker/cp-test_ha-638724-m04_ha-638724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 ssh -n ha-638724-m03 "sudo cat /home/docker/cp-test_ha-638724-m04_ha-638724-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.34s)
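
The CopyFile block above repeats one copy-then-verify pattern for every node pair: minikube cp pushes the file, then minikube ssh cats it back. A minimal Go sketch of that loop, assuming a minikube binary on PATH and reusing the profile and node names from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run shells out to minikube and returns trimmed combined output, failing loudly on error.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "ha-638724" // assumption: profile name reused from the log above
	nodes := []string{"ha-638724", "ha-638724-m02", "ha-638724-m03", "ha-638724-m04"}
	for _, n := range nodes {
		// Copy a local file onto the node, then read it back over ssh,
		// mirroring the cp / "sudo cat" pairs in the test output.
		run("-p", profile, "cp", "testdata/cp-test.txt", n+":/home/docker/cp-test.txt")
		got := run("-p", profile, "ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
		fmt.Printf("node %s round-tripped %d bytes\n", n, len(got))
	}
}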

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 node stop m02 -v=7 --alsologtostderr: (12.079066187s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr: exit status 7 (744.832423ms)

                                                
                                                
-- stdout --
	ha-638724
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-638724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-638724-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-638724-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:48:32.380958 1651950 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:48:32.381167 1651950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:48:32.381185 1651950 out.go:358] Setting ErrFile to fd 2...
	I0912 22:48:32.381191 1651950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:48:32.381446 1651950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:48:32.381643 1651950 out.go:352] Setting JSON to false
	I0912 22:48:32.381671 1651950 mustload.go:65] Loading cluster: ha-638724
	I0912 22:48:32.381747 1651950 notify.go:220] Checking for updates...
	I0912 22:48:32.382156 1651950 config.go:182] Loaded profile config "ha-638724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:48:32.382272 1651950 status.go:255] checking status of ha-638724 ...
	I0912 22:48:32.382806 1651950 cli_runner.go:164] Run: docker container inspect ha-638724 --format={{.State.Status}}
	I0912 22:48:32.408663 1651950 status.go:330] ha-638724 host status = "Running" (err=<nil>)
	I0912 22:48:32.408690 1651950 host.go:66] Checking if "ha-638724" exists ...
	I0912 22:48:32.408985 1651950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-638724
	I0912 22:48:32.428531 1651950 host.go:66] Checking if "ha-638724" exists ...
	I0912 22:48:32.429035 1651950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:48:32.429109 1651950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-638724
	I0912 22:48:32.450466 1651950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/ha-638724/id_rsa Username:docker}
	I0912 22:48:32.553114 1651950 ssh_runner.go:195] Run: systemctl --version
	I0912 22:48:32.557325 1651950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:48:32.568503 1651950 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:48:32.627997 1651950 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-12 22:48:32.615967607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:48:32.628578 1651950 kubeconfig.go:125] found "ha-638724" server: "https://192.168.49.254:8443"
	I0912 22:48:32.628609 1651950 api_server.go:166] Checking apiserver status ...
	I0912 22:48:32.628667 1651950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:48:32.640088 1651950 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	I0912 22:48:32.649642 1651950 api_server.go:182] apiserver freezer: "10:freezer:/docker/424709c1d64c244c096b81cb1817c470e0ebd001e5f87788aeb1eabb486c4cc0/kubepods/burstable/pod2899c32664e3014c121d062d35bc1916/d9a92055bebd9fdac01e64cdd1149809d0a1de30de448e477ee955e13417e22f"
	I0912 22:48:32.649728 1651950 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/424709c1d64c244c096b81cb1817c470e0ebd001e5f87788aeb1eabb486c4cc0/kubepods/burstable/pod2899c32664e3014c121d062d35bc1916/d9a92055bebd9fdac01e64cdd1149809d0a1de30de448e477ee955e13417e22f/freezer.state
	I0912 22:48:32.659000 1651950 api_server.go:204] freezer state: "THAWED"
	I0912 22:48:32.659034 1651950 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 22:48:32.667035 1651950 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 22:48:32.667076 1651950 status.go:422] ha-638724 apiserver status = Running (err=<nil>)
	I0912 22:48:32.667088 1651950 status.go:257] ha-638724 status: &{Name:ha-638724 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:48:32.667117 1651950 status.go:255] checking status of ha-638724-m02 ...
	I0912 22:48:32.667466 1651950 cli_runner.go:164] Run: docker container inspect ha-638724-m02 --format={{.State.Status}}
	I0912 22:48:32.684354 1651950 status.go:330] ha-638724-m02 host status = "Stopped" (err=<nil>)
	I0912 22:48:32.684382 1651950 status.go:343] host is not running, skipping remaining checks
	I0912 22:48:32.684389 1651950 status.go:257] ha-638724-m02 status: &{Name:ha-638724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:48:32.684408 1651950 status.go:255] checking status of ha-638724-m03 ...
	I0912 22:48:32.684738 1651950 cli_runner.go:164] Run: docker container inspect ha-638724-m03 --format={{.State.Status}}
	I0912 22:48:32.700971 1651950 status.go:330] ha-638724-m03 host status = "Running" (err=<nil>)
	I0912 22:48:32.700998 1651950 host.go:66] Checking if "ha-638724-m03" exists ...
	I0912 22:48:32.701310 1651950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-638724-m03
	I0912 22:48:32.720205 1651950 host.go:66] Checking if "ha-638724-m03" exists ...
	I0912 22:48:32.720695 1651950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:48:32.720774 1651950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-638724-m03
	I0912 22:48:32.744151 1651950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34669 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/ha-638724-m03/id_rsa Username:docker}
	I0912 22:48:32.841712 1651950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:48:32.854492 1651950 kubeconfig.go:125] found "ha-638724" server: "https://192.168.49.254:8443"
	I0912 22:48:32.854524 1651950 api_server.go:166] Checking apiserver status ...
	I0912 22:48:32.854567 1651950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:48:32.866018 1651950 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1323/cgroup
	I0912 22:48:32.875614 1651950 api_server.go:182] apiserver freezer: "10:freezer:/docker/de98743e36cb640201526b8d9c74538572e09b240ecc0971959c75310a3c7987/kubepods/burstable/pod71c7bc5a8d962e90345639c9cfe3e0c9/4ea5e0d9fc6a5360a934734a101566f16cca78996a7033755906d1d9a9e1703b"
	I0912 22:48:32.875689 1651950 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/de98743e36cb640201526b8d9c74538572e09b240ecc0971959c75310a3c7987/kubepods/burstable/pod71c7bc5a8d962e90345639c9cfe3e0c9/4ea5e0d9fc6a5360a934734a101566f16cca78996a7033755906d1d9a9e1703b/freezer.state
	I0912 22:48:32.884701 1651950 api_server.go:204] freezer state: "THAWED"
	I0912 22:48:32.884769 1651950 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 22:48:32.892650 1651950 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 22:48:32.892725 1651950 status.go:422] ha-638724-m03 apiserver status = Running (err=<nil>)
	I0912 22:48:32.892748 1651950 status.go:257] ha-638724-m03 status: &{Name:ha-638724-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:48:32.892793 1651950 status.go:255] checking status of ha-638724-m04 ...
	I0912 22:48:32.893143 1651950 cli_runner.go:164] Run: docker container inspect ha-638724-m04 --format={{.State.Status}}
	I0912 22:48:32.910317 1651950 status.go:330] ha-638724-m04 host status = "Running" (err=<nil>)
	I0912 22:48:32.910343 1651950 host.go:66] Checking if "ha-638724-m04" exists ...
	I0912 22:48:32.910716 1651950 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-638724-m04
	I0912 22:48:32.934614 1651950 host.go:66] Checking if "ha-638724-m04" exists ...
	I0912 22:48:32.934972 1651950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:48:32.935026 1651950 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-638724-m04
	I0912 22:48:32.960239 1651950 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34674 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/ha-638724-m04/id_rsa Username:docker}
	I0912 22:48:33.061395 1651950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:48:33.076311 1651950 status.go:257] ha-638724-m04 status: &{Name:ha-638724-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
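
The non-zero exit above is the interesting part: minikube status reports a stopped node through its exit code (7 in this run) while still printing per-node state on stdout. A small Go sketch of reading both, assuming the same profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: same profile name as in the log above. Any stopped node makes
	// "minikube status" exit non-zero (exit status 7 in this run) while the
	// per-node table is still printed on stdout.
	out, err := exec.Command("minikube", "-p", "ha-638724", "status").Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exited %d: at least one node is not fully running\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}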

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (30.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 node start m02 -v=7 --alsologtostderr
E0912 22:48:55.444375 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.450777 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.462227 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.483765 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.525262 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.606606 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:55.768163 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:56.090217 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:56.732134 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:48:58.015004 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:00.576830 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:02.182250 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 node start m02 -v=7 --alsologtostderr: (28.866144062s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr: (1.138304934s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-638724 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-638724 -v=7 --alsologtostderr
E0912 22:49:05.698195 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:15.940486 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:29.887927 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:36.421919 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-638724 -v=7 --alsologtostderr: (37.025170454s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-638724 --wait=true -v=7 --alsologtostderr
E0912 22:50:17.383864 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-638724 --wait=true -v=7 --alsologtostderr: (1m40.211519841s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-638724
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (137.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 node delete m03 -v=7 --alsologtostderr: (9.52501532s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 stop -v=7 --alsologtostderr
E0912 22:51:39.309770 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 stop -v=7 --alsologtostderr: (35.884706459s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr: exit status 7 (113.110565ms)

                                                
                                                
-- stdout --
	ha-638724
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-638724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-638724-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:52:08.845919 1666255 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:52:08.846313 1666255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:08.846326 1666255 out.go:358] Setting ErrFile to fd 2...
	I0912 22:52:08.846331 1666255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:52:08.846578 1666255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 22:52:08.846794 1666255 out.go:352] Setting JSON to false
	I0912 22:52:08.846826 1666255 mustload.go:65] Loading cluster: ha-638724
	I0912 22:52:08.846895 1666255 notify.go:220] Checking for updates...
	I0912 22:52:08.848160 1666255 config.go:182] Loaded profile config "ha-638724": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 22:52:08.848187 1666255 status.go:255] checking status of ha-638724 ...
	I0912 22:52:08.848687 1666255 cli_runner.go:164] Run: docker container inspect ha-638724 --format={{.State.Status}}
	I0912 22:52:08.864222 1666255 status.go:330] ha-638724 host status = "Stopped" (err=<nil>)
	I0912 22:52:08.864247 1666255 status.go:343] host is not running, skipping remaining checks
	I0912 22:52:08.864254 1666255 status.go:257] ha-638724 status: &{Name:ha-638724 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:52:08.864282 1666255 status.go:255] checking status of ha-638724-m02 ...
	I0912 22:52:08.864586 1666255 cli_runner.go:164] Run: docker container inspect ha-638724-m02 --format={{.State.Status}}
	I0912 22:52:08.881896 1666255 status.go:330] ha-638724-m02 host status = "Stopped" (err=<nil>)
	I0912 22:52:08.881920 1666255 status.go:343] host is not running, skipping remaining checks
	I0912 22:52:08.881927 1666255 status.go:257] ha-638724-m02 status: &{Name:ha-638724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:52:08.881967 1666255 status.go:255] checking status of ha-638724-m04 ...
	I0912 22:52:08.882270 1666255 cli_runner.go:164] Run: docker container inspect ha-638724-m04 --format={{.State.Status}}
	I0912 22:52:08.911618 1666255 status.go:330] ha-638724-m04 host status = "Stopped" (err=<nil>)
	I0912 22:52:08.911638 1666255 status.go:343] host is not running, skipping remaining checks
	I0912 22:52:08.911647 1666255 status.go:257] ha-638724-m04 status: &{Name:ha-638724-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (79.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-638724 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-638724 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.346926886s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-638724 --control-plane -v=7 --alsologtostderr
E0912 22:53:55.443874 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:02.181981 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-638724 --control-plane -v=7 --alsologtostderr: (43.390328977s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-638724 status -v=7 --alsologtostderr: (1.046238124s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
x
+
TestJSONOutput/start/Command (50.3s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-542308 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0912 22:54:23.152017 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-542308 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.293270805s)
--- PASS: TestJSONOutput/start/Command (50.30s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-542308 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-542308 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.81s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-542308 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-542308 --output=json --user=testUser: (5.807337467s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-180533 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-180533 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.58393ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e7b542d6-805c-4dd4-a54c-14341e5836b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-180533] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"55816a86-6f3b-41c2-84a5-7e093a180e42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"c3d497b0-3650-40ac-84cc-09df0f7c7af8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"96eb2a6d-e87d-4d1c-8537-699f26520f70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig"}}
	{"specversion":"1.0","id":"b2b68149-a01d-4898-a7b7-e74990babe31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube"}}
	{"specversion":"1.0","id":"3da30801-29eb-4dd8-98e1-88f441f271d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0811d9e-1f2a-44f4-afcc-cb32f06ab4ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"09e35cbc-9853-4852-961e-e4f83575fdcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-180533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-180533
--- PASS: TestErrorJSONOutput (0.21s)
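
Every --output=json line above is a self-contained CloudEvents-style record. A minimal Go sketch of decoding one of them, with the struct fields inferred from these log lines rather than from minikube's own types:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the JSON lines above; the names are
// inferred from this log, not taken from minikube's own type definitions.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"09e35cbc-9853-4852-961e-e4f83575fdcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
}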

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.91s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-678766 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-678766 --network=: (37.922230832s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-678766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-678766
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-678766: (1.968476832s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.91s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (34.2s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-064321 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-064321 --network=bridge: (32.171244024s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-064321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-064321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-064321: (2.005485373s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.20s)

                                                
                                    
x
+
TestKicExistingNetwork (32.58s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-580756 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-580756 --network=existing-network: (30.517191851s)
helpers_test.go:175: Cleaning up "existing-network-580756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-580756
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-580756: (1.912679798s)
--- PASS: TestKicExistingNetwork (32.58s)

                                                
                                    
x
+
TestKicCustomSubnet (35.23s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-196803 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-196803 --subnet=192.168.60.0/24: (33.126526381s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-196803 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-196803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-196803
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-196803: (2.072412621s)
--- PASS: TestKicCustomSubnet (35.23s)

                                                
                                    
x
+
TestKicStaticIP (31.51s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-325158 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-325158 --static-ip=192.168.200.200: (29.385955223s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-325158 ip
helpers_test.go:175: Cleaning up "static-ip-325158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-325158
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-325158: (1.969161065s)
--- PASS: TestKicStaticIP (31.51s)
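
The static-IP check above amounts to starting with --static-ip and comparing minikube ip against the requested address. A hedged Go sketch of the same comparison, using a hypothetical profile name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200"   // address requested via --static-ip in the run above
	profile := "static-ip-demo" // hypothetical profile name, not from the log

	// Assumes a minikube binary on PATH and the docker driver, as in this run.
	if out, err := exec.Command("minikube", "start", "-p", profile,
		"--static-ip="+want, "--driver=docker").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("start failed: %v\n%s", err, out))
	}
	ipOut, err := exec.Command("minikube", "-p", profile, "ip").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(ipOut))
	fmt.Printf("requested %s, got %s, match=%v\n", want, got, got == want)
}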

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (67.09s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-658058 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-658058 --driver=docker  --container-runtime=containerd: (28.896010788s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-661061 --driver=docker  --container-runtime=containerd
E0912 22:58:55.443758 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:02.182002 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-661061 --driver=docker  --container-runtime=containerd: (32.923987207s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-658058
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-661061
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-661061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-661061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-661061: (2.059150901s)
helpers_test.go:175: Cleaning up "first-658058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-658058
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-658058: (1.923465759s)
--- PASS: TestMinikubeProfile (67.09s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.91s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-219773 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-219773 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.911470759s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.91s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-219773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-233377 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-233377 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.923584713s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-233377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-219773 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-219773 --alsologtostderr -v=5: (1.607385179s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-233377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-233377
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-233377: (1.223553459s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.59s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-233377
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-233377: (6.594294995s)
--- PASS: TestMountStart/serial/RestartStopped (7.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-233377 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (68.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814052 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0912 23:00:25.249607 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814052 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.391750719s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.90s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-814052 -- rollout status deployment/busybox: (15.381255091s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-6xkkv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-m525b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-6xkkv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-m525b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-6xkkv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-m525b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.20s)
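Note: the deployment check above applies a two-replica busybox manifest and verifies in-cluster DNS from each pod. A minimal sketch, assuming kubectl is pointed at the multinode cluster; the pod name is taken from whatever the get pods query returns:

	kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	kubectl rollout status deployment/busybox
	# pick one of the busybox pods and resolve an in-cluster name from it
	POD=$(kubectl get pods -o jsonpath='{.items[0].metadata.name}')
	kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local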

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-6xkkv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-6xkkv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-m525b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-814052 -- exec busybox-7dff88458-m525b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (18.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-814052 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-814052 -v 3 --alsologtostderr: (18.073727025s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-814052 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp testdata/cp-test.txt multinode-814052:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2884342008/001/cp-test_multinode-814052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052:/home/docker/cp-test.txt multinode-814052-m02:/home/docker/cp-test_multinode-814052_multinode-814052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test_multinode-814052_multinode-814052-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052:/home/docker/cp-test.txt multinode-814052-m03:/home/docker/cp-test_multinode-814052_multinode-814052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test_multinode-814052_multinode-814052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp testdata/cp-test.txt multinode-814052-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2884342008/001/cp-test_multinode-814052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m02:/home/docker/cp-test.txt multinode-814052:/home/docker/cp-test_multinode-814052-m02_multinode-814052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test_multinode-814052-m02_multinode-814052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m02:/home/docker/cp-test.txt multinode-814052-m03:/home/docker/cp-test_multinode-814052-m02_multinode-814052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test_multinode-814052-m02_multinode-814052-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp testdata/cp-test.txt multinode-814052-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2884342008/001/cp-test_multinode-814052-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m03:/home/docker/cp-test.txt multinode-814052:/home/docker/cp-test_multinode-814052-m03_multinode-814052.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052 "sudo cat /home/docker/cp-test_multinode-814052-m03_multinode-814052.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 cp multinode-814052-m03:/home/docker/cp-test.txt multinode-814052-m02:/home/docker/cp-test_multinode-814052-m03_multinode-814052-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 ssh -n multinode-814052-m02 "sudo cat /home/docker/cp-test_multinode-814052-m03_multinode-814052-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.21s)
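Note: the copy checks above exercise minikube cp in three directions (host to node, node to host, node to node) and confirm each transfer with cat over SSH. A minimal sketch, assuming a running multinode profile; profile and file names are illustrative:

	# host -> node, then read it back over SSH
	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
	# node -> host round trip
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt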

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-814052 node stop m03: (1.22147295s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814052 status: exit status 7 (523.235989ms)

                                                
                                                
-- stdout --
	multinode-814052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814052-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814052-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr: exit status 7 (519.858548ms)

                                                
                                                
-- stdout --
	multinode-814052
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-814052-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-814052-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 23:01:48.323069 1719585 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:01:48.323264 1719585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:01:48.323334 1719585 out.go:358] Setting ErrFile to fd 2...
	I0912 23:01:48.323355 1719585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:01:48.323641 1719585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 23:01:48.323905 1719585 out.go:352] Setting JSON to false
	I0912 23:01:48.323963 1719585 mustload.go:65] Loading cluster: multinode-814052
	I0912 23:01:48.324019 1719585 notify.go:220] Checking for updates...
	I0912 23:01:48.324467 1719585 config.go:182] Loaded profile config "multinode-814052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:01:48.324502 1719585 status.go:255] checking status of multinode-814052 ...
	I0912 23:01:48.325286 1719585 cli_runner.go:164] Run: docker container inspect multinode-814052 --format={{.State.Status}}
	I0912 23:01:48.347932 1719585 status.go:330] multinode-814052 host status = "Running" (err=<nil>)
	I0912 23:01:48.347966 1719585 host.go:66] Checking if "multinode-814052" exists ...
	I0912 23:01:48.348267 1719585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814052
	I0912 23:01:48.375787 1719585 host.go:66] Checking if "multinode-814052" exists ...
	I0912 23:01:48.376135 1719585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 23:01:48.376214 1719585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814052
	I0912 23:01:48.400982 1719585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34779 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/multinode-814052/id_rsa Username:docker}
	I0912 23:01:48.497511 1719585 ssh_runner.go:195] Run: systemctl --version
	I0912 23:01:48.501927 1719585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:01:48.513701 1719585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:01:48.570946 1719585 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-12 23:01:48.560522427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:01:48.571546 1719585 kubeconfig.go:125] found "multinode-814052" server: "https://192.168.67.2:8443"
	I0912 23:01:48.571585 1719585 api_server.go:166] Checking apiserver status ...
	I0912 23:01:48.571629 1719585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 23:01:48.584226 1719585 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1352/cgroup
	I0912 23:01:48.594066 1719585 api_server.go:182] apiserver freezer: "10:freezer:/docker/8c2e44def7a311849b6f7e105fb6830d197b78019713e824476b8d1245b3ef7b/kubepods/burstable/pod1cf9453e48ad065cb9538464e50c6a10/97e81cfe4837c1914182adac450821ac8232c6868c74f1358d3b2b6feef3b918"
	I0912 23:01:48.594159 1719585 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c2e44def7a311849b6f7e105fb6830d197b78019713e824476b8d1245b3ef7b/kubepods/burstable/pod1cf9453e48ad065cb9538464e50c6a10/97e81cfe4837c1914182adac450821ac8232c6868c74f1358d3b2b6feef3b918/freezer.state
	I0912 23:01:48.602971 1719585 api_server.go:204] freezer state: "THAWED"
	I0912 23:01:48.603002 1719585 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0912 23:01:48.611015 1719585 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0912 23:01:48.611052 1719585 status.go:422] multinode-814052 apiserver status = Running (err=<nil>)
	I0912 23:01:48.611063 1719585 status.go:257] multinode-814052 status: &{Name:multinode-814052 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 23:01:48.611091 1719585 status.go:255] checking status of multinode-814052-m02 ...
	I0912 23:01:48.611454 1719585 cli_runner.go:164] Run: docker container inspect multinode-814052-m02 --format={{.State.Status}}
	I0912 23:01:48.628502 1719585 status.go:330] multinode-814052-m02 host status = "Running" (err=<nil>)
	I0912 23:01:48.628527 1719585 host.go:66] Checking if "multinode-814052-m02" exists ...
	I0912 23:01:48.628838 1719585 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-814052-m02
	I0912 23:01:48.646289 1719585 host.go:66] Checking if "multinode-814052-m02" exists ...
	I0912 23:01:48.646657 1719585 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 23:01:48.646704 1719585 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-814052-m02
	I0912 23:01:48.663814 1719585 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34784 SSHKeyPath:/home/jenkins/minikube-integration/19616-1592376/.minikube/machines/multinode-814052-m02/id_rsa Username:docker}
	I0912 23:01:48.760892 1719585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 23:01:48.772772 1719585 status.go:257] multinode-814052-m02 status: &{Name:multinode-814052-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 23:01:48.772805 1719585 status.go:255] checking status of multinode-814052-m03 ...
	I0912 23:01:48.773136 1719585 cli_runner.go:164] Run: docker container inspect multinode-814052-m03 --format={{.State.Status}}
	I0912 23:01:48.791342 1719585 status.go:330] multinode-814052-m03 host status = "Stopped" (err=<nil>)
	I0912 23:01:48.791364 1719585 status.go:343] host is not running, skipping remaining checks
	I0912 23:01:48.791373 1719585 status.go:257] multinode-814052-m03 status: &{Name:multinode-814052-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
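Note: stopping a single node leaves the cluster degraded, and minikube status reports that with a non-zero exit code. A minimal sketch, assuming a running multinode profile (name illustrative); exit status 7 is expected while any node is stopped:

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status || echo "status exited with $? (non-zero while a node is down)"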

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-814052 node start m03 -v=7 --alsologtostderr: (8.767854547s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.52s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (99.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814052
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-814052
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-814052: (24.922059346s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814052 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814052 --wait=true -v=8 --alsologtostderr: (1m14.642778436s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814052
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.68s)
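Note: the restart check verifies that a full stop followed by a --wait=true start preserves the node list. A minimal sketch against an existing multinode profile (name illustrative):

	minikube -p multinode-demo node list
	minikube stop -p multinode-demo
	minikube start -p multinode-demo --wait=true
	# should report the same nodes as before the stop
	minikube -p multinode-demo node list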

                                                
                                    
TestMultiNode/serial/DeleteNode (5.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-814052 node delete m03: (4.928387901s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.58s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 stop
E0912 23:03:55.444754 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:02.181706 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-814052 stop: (23.927588705s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814052 status: exit status 7 (92.360684ms)

                                                
                                                
-- stdout --
	multinode-814052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814052-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr: exit status 7 (86.350333ms)

                                                
                                                
-- stdout --
	multinode-814052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-814052-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 23:04:07.634110 1728031 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:04:07.634480 1728031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:04:07.634488 1728031 out.go:358] Setting ErrFile to fd 2...
	I0912 23:04:07.634494 1728031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:04:07.634741 1728031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 23:04:07.634930 1728031 out.go:352] Setting JSON to false
	I0912 23:04:07.634943 1728031 mustload.go:65] Loading cluster: multinode-814052
	I0912 23:04:07.635332 1728031 config.go:182] Loaded profile config "multinode-814052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:04:07.635342 1728031 status.go:255] checking status of multinode-814052 ...
	I0912 23:04:07.635868 1728031 notify.go:220] Checking for updates...
	I0912 23:04:07.636020 1728031 cli_runner.go:164] Run: docker container inspect multinode-814052 --format={{.State.Status}}
	I0912 23:04:07.654443 1728031 status.go:330] multinode-814052 host status = "Stopped" (err=<nil>)
	I0912 23:04:07.654465 1728031 status.go:343] host is not running, skipping remaining checks
	I0912 23:04:07.654473 1728031 status.go:257] multinode-814052 status: &{Name:multinode-814052 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 23:04:07.654510 1728031 status.go:255] checking status of multinode-814052-m02 ...
	I0912 23:04:07.654832 1728031 cli_runner.go:164] Run: docker container inspect multinode-814052-m02 --format={{.State.Status}}
	I0912 23:04:07.677349 1728031 status.go:330] multinode-814052-m02 host status = "Stopped" (err=<nil>)
	I0912 23:04:07.677371 1728031 status.go:343] host is not running, skipping remaining checks
	I0912 23:04:07.677378 1728031 status.go:257] multinode-814052-m02 status: &{Name:multinode-814052-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814052 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814052 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.665855153s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-814052 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-814052
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814052-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-814052-m02 --driver=docker  --container-runtime=containerd: exit status 14 (140.265666ms)

                                                
                                                
-- stdout --
	* [multinode-814052-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-814052-m02' is duplicated with machine name 'multinode-814052-m02' in profile 'multinode-814052'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-814052-m03 --driver=docker  --container-runtime=containerd
E0912 23:05:18.515863 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-814052-m03 --driver=docker  --container-runtime=containerd: (31.652340208s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-814052
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-814052: exit status 80 (331.689646ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-814052 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-814052-m03 already exists in multinode-814052-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-814052-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-814052-m03: (1.939167342s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.27s)
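Note: the conflict check relies on minikube rejecting a new profile whose name collides with a machine name inside an existing profile (exit status 14, MK_USAGE). A minimal sketch against the multinode cluster created above:

	# fails: 'multinode-814052-m02' is already the second machine of profile 'multinode-814052'
	minikube start -p multinode-814052-m02 --driver=docker --container-runtime=containerd
	echo $?   # 14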

                                                
                                    
TestPreload (112.53s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-255554 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-255554 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.357809115s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-255554 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-255554 image pull gcr.io/k8s-minikube/busybox: (1.963385844s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-255554
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-255554: (12.060957115s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-255554 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-255554 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.196245369s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-255554 image list
helpers_test.go:175: Cleaning up "test-preload-255554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-255554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-255554: (2.504514156s)
--- PASS: TestPreload (112.53s)
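Note: the preload check builds a cluster without preloaded images, pulls an extra image, restarts, and confirms the image survives the restart. A minimal sketch, assuming the docker driver; the profile name is illustrative:

	minikube start -p preload-demo --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=containerd
	# busybox should still be listed after the restart
	minikube -p preload-demo image list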

                                                
                                    
TestScheduledStopUnix (108.56s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-589269 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-589269 --memory=2048 --driver=docker  --container-runtime=containerd: (31.375246136s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-589269 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-589269 -n scheduled-stop-589269
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-589269 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-589269 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-589269 -n scheduled-stop-589269
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-589269
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-589269 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0912 23:08:55.443690 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:09:02.181725 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-589269
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-589269: exit status 7 (67.567494ms)

                                                
                                                
-- stdout --
	scheduled-stop-589269
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-589269 -n scheduled-stop-589269
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-589269 -n scheduled-stop-589269: exit status 7 (66.297066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-589269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-589269
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-589269: (5.589841478s)
--- PASS: TestScheduledStopUnix (108.56s)
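Note: scheduled stop is driven entirely by CLI flags: a stop can be queued, cancelled, and re-queued until it fires. A minimal sketch, assuming a running profile (name illustrative):

	minikube stop -p sched-demo --schedule 5m          # queue a stop
	minikube status --format={{.TimeToStop}} -p sched-demo
	minikube stop -p sched-demo --cancel-scheduled     # cancel it
	minikube stop -p sched-demo --schedule 15s         # re-queue; host shows Stopped once it fires
	minikube status --format={{.Host}} -p sched-demo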

                                                
                                    
TestInsufficientStorage (10.08s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-462691 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-462691 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.630914141s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7712841b-6dee-43fd-9a91-1fa8d15b5b0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-462691] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"51bbd689-7b5e-405a-8921-62ac12ec7e69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"36181429-f87a-4527-9f3a-a2d6966042b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e7d535f-f89b-4376-ad09-f41e54df5af6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig"}}
	{"specversion":"1.0","id":"a3ef8a2a-dd68-40dc-852c-7512b2fc40f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube"}}
	{"specversion":"1.0","id":"4e78a328-cbc8-46ca-92ed-67ef31aa38e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"19b0635c-40da-42f7-aa23-d5958dff672b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68480053-21c9-4f55-96e1-b8e45f88f496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1fd5c2a4-1050-4b65-aab3-a55a4d3df719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"491e065a-1def-4191-8874-92bb3903d8b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"714a8398-6383-47a3-9511-74dbbe96b243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"63d0e59a-a963-4871-917e-404105e0616b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-462691\" primary control-plane node in \"insufficient-storage-462691\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"269f5283-9f9c-47a2-870f-6f6c652b9c8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726156396-19616 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dbf8832-4ead-4532-8805-ce142ae5aa9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"36d43be8-35c7-462b-8f41-2d86067e8841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-462691 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-462691 --output=json --layout=cluster: exit status 7 (286.031128ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-462691","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-462691","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 23:09:27.377728 1746597 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-462691" does not appear in /home/jenkins/minikube-integration/19616-1592376/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-462691 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-462691 --output=json --layout=cluster: exit status 7 (288.304957ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-462691","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-462691","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 23:09:27.665366 1746659 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-462691" does not appear in /home/jenkins/minikube-integration/19616-1592376/kubeconfig
	E0912 23:09:27.675439 1746659 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/insufficient-storage-462691/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-462691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-462691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-462691: (1.874177623s)
--- PASS: TestInsufficientStorage (10.08s)
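Note: the storage check uses the test-only MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE overrides shown in the JSON events above to simulate a nearly full /var, which makes start fail with exit code 26 (RSRC_DOCKER_STORAGE). A minimal sketch; the profile name is illustrative:

	# pretend only 19 of 100 storage units are available
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
	echo $?   # 26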

                                                
                                    
TestRunningBinaryUpgrade (88.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E0912 23:14:02.181696 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2091880829 start -p running-upgrade-977707 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2091880829 start -p running-upgrade-977707 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.951538663s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-977707 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-977707 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.891151483s)
helpers_test.go:175: Cleaning up "running-upgrade-977707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-977707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-977707: (2.750322869s)
--- PASS: TestRunningBinaryUpgrade (88.20s)

                                                
                                    
TestKubernetesUpgrade (349.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.214396777s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-267237
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-267237: (1.368602808s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-267237 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-267237 status --format={{.Host}}: exit status 7 (68.880711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.832287136s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-267237 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (97.259738ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-267237] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-267237
	    minikube start -p kubernetes-upgrade-267237 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2672372 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-267237 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-267237 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.528974222s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-267237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-267237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-267237: (2.113467128s)
--- PASS: TestKubernetesUpgrade (349.32s)

                                                
                                    
TestMissingContainerUpgrade (170.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3120902830 start -p missing-upgrade-761026 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3120902830 start -p missing-upgrade-761026 --memory=2200 --driver=docker  --container-runtime=containerd: (1m30.782264927s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-761026
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-761026: (10.297228742s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-761026
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-761026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-761026 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.342795201s)
helpers_test.go:175: Cleaning up "missing-upgrade-761026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-761026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-761026: (2.263597664s)
--- PASS: TestMissingContainerUpgrade (170.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (77.518991ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-957744] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957744 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957744 --driver=docker  --container-runtime=containerd: (40.275501158s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-957744 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.197898568s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-957744 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-957744 status -o json: exit status 2 (287.30093ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-957744","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-957744
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-957744: (1.898242333s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.38s)

                                                
                                    
TestNoKubernetes/serial/Start (6.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957744 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.585649372s)
--- PASS: TestNoKubernetes/serial/Start (6.59s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-957744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-957744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.978748ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
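Note: the non-zero exit above is the outcome this check expects. As an illustration (not part of the test itself), the probe boils down to asking systemd whether the kubelet unit is active; `systemctl is-active --quiet` exits 0 for an active unit and non-zero (3 in the run above) when it is not running, which is how the test concludes Kubernetes components are absent:

    # illustrative manual re-check on the minikube node (assumes a systemd host)
    minikube ssh -p NoKubernetes-957744 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero (3 above) means kubelet is not running, the desired state here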

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-957744
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-957744: (1.271474705s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957744 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957744 --driver=docker  --container-runtime=containerd: (7.605572422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-957744 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-957744 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.625907ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (97.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2108759762 start -p stopped-upgrade-290630 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2108759762 start -p stopped-upgrade-290630 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (41.609529233s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2108759762 -p stopped-upgrade-290630 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2108759762 -p stopped-upgrade-290630 stop: (19.971754612s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-290630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0912 23:13:55.444159 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-290630 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.067106515s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.65s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-290630
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-290630: (1.00937751s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestPause/serial/Start (96.09s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-693262 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-693262 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m36.087194268s)
--- PASS: TestPause/serial/Start (96.09s)

                                                
                                    
TestNetworkPlugins/group/false (4.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-541309 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-541309 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (256.766436ms)

                                                
                                                
-- stdout --
	* [false-541309] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 23:17:05.771981 1786936 out.go:345] Setting OutFile to fd 1 ...
	I0912 23:17:05.772168 1786936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:17:05.772182 1786936 out.go:358] Setting ErrFile to fd 2...
	I0912 23:17:05.772189 1786936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 23:17:05.772475 1786936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1592376/.minikube/bin
	I0912 23:17:05.772957 1786936 out.go:352] Setting JSON to false
	I0912 23:17:05.774087 1786936 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28753,"bootTime":1726154273,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0912 23:17:05.774648 1786936 start.go:139] virtualization:  
	I0912 23:17:05.777355 1786936 out.go:177] * [false-541309] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 23:17:05.779853 1786936 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 23:17:05.780357 1786936 notify.go:220] Checking for updates...
	I0912 23:17:05.783888 1786936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 23:17:05.785918 1786936 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1592376/kubeconfig
	I0912 23:17:05.791315 1786936 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1592376/.minikube
	I0912 23:17:05.799950 1786936 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 23:17:05.802180 1786936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 23:17:05.804927 1786936 config.go:182] Loaded profile config "pause-693262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0912 23:17:05.805089 1786936 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 23:17:05.857564 1786936 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 23:17:05.857716 1786936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 23:17:05.938145 1786936 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-12 23:17:05.927000814 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 23:17:05.938258 1786936 docker.go:318] overlay module found
	I0912 23:17:05.940148 1786936 out.go:177] * Using the docker driver based on user configuration
	I0912 23:17:05.941903 1786936 start.go:297] selected driver: docker
	I0912 23:17:05.941931 1786936 start.go:901] validating driver "docker" against <nil>
	I0912 23:17:05.941945 1786936 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 23:17:05.944236 1786936 out.go:201] 
	W0912 23:17:05.947793 1786936 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0912 23:17:05.949416 1786936 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-541309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:16:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-693262
contexts:
- context:
    cluster: pause-693262
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:16:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-693262
  name: pause-693262
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-693262
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.crt
    client-key: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-541309

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-541309"

                                                
                                                
----------------------- debugLogs end: false-541309 [took: 4.448675958s] --------------------------------
helpers_test.go:175: Cleaning up "false-541309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-541309
--- PASS: TestNetworkPlugins/group/false (4.90s)
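Note: the exit status 14 above is the expected result; the test only verifies that minikube rejects --cni=false when the container runtime is containerd, since containerd has no built-in pod networking and requires a CNI plugin. A minimal sketch of an invocation that satisfies that requirement (illustrative only; --cni=bridge is just one supported value, and this is not something the test runs):

    minikube start -p false-541309 --memory=2048 --driver=docker --container-runtime=containerd --cni=bridge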

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.28s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-693262 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-693262 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.264823341s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.28s)

                                                
                                    
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-693262 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-693262 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-693262 --output=json --layout=cluster: exit status 2 (367.220203ms)

                                                
                                                
-- stdout --
	{"Name":"pause-693262","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-693262","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestPause/serial/Unpause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-693262 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

                                                
                                    
TestPause/serial/PauseAgain (1.11s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-693262 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-693262 --alsologtostderr -v=5: (1.10707839s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

                                                
                                    
TestPause/serial/DeletePaused (3.55s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-693262 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-693262 --alsologtostderr -v=5: (3.550705195s)
--- PASS: TestPause/serial/DeletePaused (3.55s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.21s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-693262
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-693262: exit status 1 (19.884834ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-693262: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.21s)
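Note: the cleanup verification above leans on docker's own exit codes: once the paused profile has been deleted, `docker volume inspect` on the profile volume exits non-zero with "no such volume", and `docker ps -a` / `docker network ls` no longer list the profile. An illustrative manual re-check (not part of the test):

    docker volume inspect pause-693262 || echo "volume gone, as expected after delete"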

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-011723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0912 23:18:55.443620 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:19:02.181591 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-011723 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m44.05216122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-011723 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a260c8b-3e99-476d-bb2a-f42a54017c50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a260c8b-3e99-476d-bb2a-f42a54017c50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003979498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-011723 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (73.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-693555 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-693555 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m13.213581503s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-011723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-011723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.279898358s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-011723 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-011723 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-011723 --alsologtostderr -v=3: (12.320632437s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-011723 -n old-k8s-version-011723
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-011723 -n old-k8s-version-011723: exit status 7 (91.955099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-011723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-693555 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c7c6524b-7246-43f5-b485-a4f2b723da5f] Pending
helpers_test.go:344: "busybox" [c7c6524b-7246-43f5-b485-a4f2b723da5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c7c6524b-7246-43f5-b485-a4f2b723da5f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003886452s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-693555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-693555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-693555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07318869s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-693555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-693555 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-693555 --alsologtostderr -v=3: (12.081811571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693555 -n no-preload-693555
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693555 -n no-preload-693555: exit status 7 (76.238606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-693555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (290.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-693555 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0912 23:23:55.444571 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:24:02.181434 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-693555 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m49.785832472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693555 -n no-preload-693555
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (290.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d59hf" [21a859c0-1ecc-435d-a23e-5f74c15efca5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004887949s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-d59hf" [21a859c0-1ecc-435d-a23e-5f74c15efca5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003778841s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-693555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-693555 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.81s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-693555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693555 -n no-preload-693555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693555 -n no-preload-693555: exit status 2 (317.757544ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693555 -n no-preload-693555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693555 -n no-preload-693555: exit status 2 (324.310392ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-693555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693555 -n no-preload-693555
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693555 -n no-preload-693555
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.81s)

TestStartStop/group/embed-certs/serial/FirstStart (93.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-945836 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-945836 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m33.086952946s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nbkcq" [5c4b461c-1cbb-429d-8376-1b1dbdfb1853] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003414184s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nbkcq" [5c4b461c-1cbb-429d-8376-1b1dbdfb1853] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006095581s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-011723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-011723 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-011723 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-011723 --alsologtostderr -v=1: (1.30499698s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-011723 -n old-k8s-version-011723
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-011723 -n old-k8s-version-011723: exit status 2 (383.219079ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-011723 -n old-k8s-version-011723
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-011723 -n old-k8s-version-011723: exit status 2 (467.007696ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-011723 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-011723 -n old-k8s-version-011723
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-011723 -n old-k8s-version-011723
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-126787 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0912 23:28:55.444221 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:29:02.181554 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-126787 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m24.596209188s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.60s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-945836 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87c7b611-73be-4549-86f0-605faa539c7f] Pending
helpers_test.go:344: "busybox" [87c7b611-73be-4549-86f0-605faa539c7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [87c7b611-73be-4549-86f0-605faa539c7f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005070193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-945836 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-945836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-945836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076482588s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-945836 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/Stop (12.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-945836 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-945836 --alsologtostderr -v=3: (12.453764908s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-126787 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [214fb7f6-ed52-4505-b372-f712112bd04d] Pending
helpers_test.go:344: "busybox" [214fb7f6-ed52-4505-b372-f712112bd04d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [214fb7f6-ed52-4505-b372-f712112bd04d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003662396s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-126787 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-126787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-126787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047227764s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-126787 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-126787 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-126787 --alsologtostderr -v=3: (12.286691195s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-945836 -n embed-certs-945836
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-945836 -n embed-certs-945836: exit status 7 (86.663419ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-945836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (268.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-945836 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-945836 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.949149813s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-945836 -n embed-certs-945836
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787: exit status 7 (112.284831ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-126787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-126787 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0912 23:31:14.556231 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.562595 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.574119 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.595600 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.637021 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.718392 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:14.879923 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:15.201770 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:15.843657 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:17.125760 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:19.687509 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:24.809674 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:35.051502 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:31:55.533753 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.763212 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.769688 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.781415 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.802830 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.844198 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:27.925561 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:28.087066 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:28.408809 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:29.050594 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:30.332126 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:32.894495 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:36.496350 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:38.016672 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:32:48.258443 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:33:08.740151 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:33:45.253628 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:33:49.701646 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:33:55.443964 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:33:58.418631 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:34:02.182445 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-126787 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m32.246360483s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-99m7p" [d7db724e-974f-41d6-8ec0-4553181cc485] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003343864s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-99m7p" [d7db724e-974f-41d6-8ec0-4553181cc485] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004657898s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-945836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-945836 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-945836 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-945836 --alsologtostderr -v=1: (1.063358928s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-945836 -n embed-certs-945836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-945836 -n embed-certs-945836: exit status 2 (318.946611ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-945836 -n embed-certs-945836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-945836 -n embed-certs-945836: exit status 2 (326.598434ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-945836 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-945836 -n embed-certs-945836
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-945836 -n embed-certs-945836
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2zkrn" [fbb44ec0-46cf-4e1f-8042-4e15ebed395c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005640571s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (48.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-525059 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-525059 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (48.769985932s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.77s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2zkrn" [fbb44ec0-46cf-4e1f-8042-4e15ebed395c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004237928s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-126787 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-126787 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-126787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787: exit status 2 (400.999275ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787: exit status 2 (402.565193ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-126787 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-126787 -n default-k8s-diff-port-126787
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.80s)

TestNetworkPlugins/group/auto/Start (97.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0912 23:35:11.623348 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m37.864577781s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.87s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-525059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-525059 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.406241119s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/newest-cni/serial/Stop (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-525059 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-525059 --alsologtostderr -v=3: (1.319718235s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-525059 -n newest-cni-525059
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-525059 -n newest-cni-525059: exit status 7 (97.173818ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-525059 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-525059 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-525059 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.86375771s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-525059 -n newest-cni-525059
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-525059 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-525059 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-525059 -n newest-cni-525059
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-525059 -n newest-cni-525059: exit status 2 (357.275021ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-525059 -n newest-cni-525059
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-525059 -n newest-cni-525059: exit status 2 (322.077338ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-525059 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-525059 -n newest-cni-525059
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-525059 -n newest-cni-525059
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)
E0912 23:41:14.556970 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.096165 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.102759 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.114098 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.135551 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.177060 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.258616 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.420243 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:39.742174 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:41:40.384384 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (82.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0912 23:36:14.556836 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m22.168969137s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-btjlf" [45a846eb-fefc-48ed-8171-0ce732dff288] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:36:42.260768 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/old-k8s-version-011723/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-btjlf" [45a846eb-fefc-48ed-8171-0ce732dff288] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003922323s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
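The DNS, Localhost and HairPin checks above boil down to three kubectl exec commands against the netcat deployment, so they can be replayed by hand while a profile is still up. A minimal sketch, assuming the auto-541309 profile from this run is still running (the commands mirror the logged ones; nc's -w sets the timeout, -i the probe interval, and -z does a connect-only scan):

# Service DNS resolution from inside the pod
kubectl --context auto-541309 exec deployment/netcat -- nslookup kubernetes.default
# The pod can reach its own port via localhost
kubectl --context auto-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin check: reach the netcat name (rather than localhost) from inside the pod
kubectl --context auto-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"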

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (70.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.230541071s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7lxh8" [a5427b44-fae4-4d3a-8f14-bc15049c24f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006202416s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6gkm2" [9113e524-3e74-448b-9c79-f4583ba8c3d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:37:27.762952 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/no-preload-693555/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6gkm2" [9113e524-3e74-448b-9c79-f4583ba8c3d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005869153s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.194757665s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7zfm9" [1c84ab50-f13f-4a16-a2b5-d796302160e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005324749s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7p4vm" [a1a849f1-4593-4cef-9654-915857e120f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7p4vm" [a1a849f1-4593-4cef-9654-915857e120f3] Running
E0912 23:38:38.518644 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005283486s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-krm54" [10c11d81-23e7-4a88-a91c-633105d602aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:38:55.443823 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/functional-209375/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-krm54" [10c11d81-23e7-4a88-a91c-633105d602aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.007190919s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (50.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.8043605s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0912 23:39:45.158756 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.165225 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.176614 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.198114 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.239579 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.321154 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.482797 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:45.804331 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:46.445735 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:47.727752 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:39:50.289871 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.506750534s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9jplq" [4b7235c9-08d2-4a1b-a3a9-ab401cb1df20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:39:55.411200 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/default-k8s-diff-port-126787/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9jplq" [4b7235c9-08d2-4a1b-a3a9-ab401cb1df20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005649602s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rxml9" [7d1790f0-38a4-4f1a-833e-3396075e770c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008090932s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (75.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-541309 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.264902012s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-541309 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lq2kd" [bf99f7d5-0d54-4dbc-9af5-a0eda2c5f279] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lq2kd" [bf99f7d5-0d54-4dbc-9af5-a0eda2c5f279] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006601216s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-541309 "pgrep -a kubelet"
E0912 23:41:41.666155 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-541309 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wkfvj" [037e703e-95cd-4793-99f2-9bf86a8f4167] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 23:41:44.227386 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wkfvj" [037e703e-95cd-4793-99f2-9bf86a8f4167] Running
E0912 23:41:49.349234 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/auto-541309/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004070817s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-541309 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-541309 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-139580 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-139580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-139580
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-490888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-490888
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0912 23:17:05.251901 1597760 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/addons-509957/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: kubenet-541309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:16:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-693262
contexts:
- context:
    cluster: pause-693262
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:16:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-693262
  name: pause-693262
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-693262
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.crt
    client-key: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-541309

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-541309"

                                                
                                                
----------------------- debugLogs end: kubenet-541309 [took: 3.989954535s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-541309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-541309
--- SKIP: TestNetworkPlugins/group/kubenet (4.16s)
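Every kubectl call in the dump above fails with 'context "kubenet-541309" does not exist' and every host probe reports that the profile is not found, because this skipped test never creates a kubenet-541309 cluster; the kubectl config shows that only the pause-693262 context exists and that no current context is set. A minimal sketch of how one might confirm that state locally, assuming the same minikube binary and kubeconfig as this CI run:

    out/minikube-linux-arm64 profile list    # kubenet-541309 should not be listed
    kubectl config get-contexts              # only pause-693262 appears
    kubectl config current-context           # should report that no current context is set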

                                                
                                    
TestNetworkPlugins/group/cilium (4.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-541309 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-541309" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19616-1592376/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:17:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-693262
contexts:
- context:
    cluster: pause-693262
    extensions:
    - extension:
        last-update: Thu, 12 Sep 2024 23:17:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-693262
  name: pause-693262
current-context: pause-693262
kind: Config
preferences: {}
users:
- name: pause-693262
  user:
    client-certificate: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.crt
    client-key: /home/jenkins/minikube-integration/19616-1592376/.minikube/profiles/pause-693262/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-541309

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-541309" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-541309"

                                                
                                                
----------------------- debugLogs end: cilium-541309 [took: 4.600079889s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-541309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-541309
--- SKIP: TestNetworkPlugins/group/cilium (4.95s)
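The cilium dump has the same shape as the kubenet one; the only substantive difference is that the kubeconfig's current-context is now pause-693262 (the last-update timestamp is 23:17:11, versus 23:16:08 in the kubenet dump). The debug collector still fails cleanly because each command names the missing context explicitly instead of relying on the default context. A hedged illustration of that distinction, assuming the pause-693262 cluster from this run is still reachable:

    kubectl --context cilium-541309 get pods -A   # fails: context was not found
    kubectl --context pause-693262 get pods -A    # would query the unrelated pause cluster instead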

                                                
                                    